Feb 02 14:28:33 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb 02 14:28:33 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 02 14:28:33 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 14:28:33 localhost kernel: BIOS-provided physical RAM map:
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 02 14:28:33 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 02 14:28:33 localhost kernel: NX (Execute Disable) protection: active
Feb 02 14:28:33 localhost kernel: APIC: Static calls initialized
Feb 02 14:28:33 localhost kernel: SMBIOS 2.8 present.
Feb 02 14:28:33 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 02 14:28:33 localhost kernel: Hypervisor detected: KVM
Feb 02 14:28:33 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 02 14:28:33 localhost kernel: kvm-clock: using sched offset of 5031269501 cycles
Feb 02 14:28:33 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 02 14:28:33 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 02 14:28:33 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 02 14:28:33 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 02 14:28:33 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 02 14:28:33 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 02 14:28:33 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 02 14:28:33 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 02 14:28:33 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 02 14:28:33 localhost kernel: Using GB pages for direct mapping
Feb 02 14:28:33 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb 02 14:28:33 localhost kernel: ACPI: Early table checksum verification disabled
Feb 02 14:28:33 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 02 14:28:33 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 14:28:33 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 14:28:33 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 14:28:33 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 02 14:28:33 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 14:28:33 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 02 14:28:33 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 02 14:28:33 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 02 14:28:33 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 02 14:28:33 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 02 14:28:33 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 02 14:28:33 localhost kernel: No NUMA configuration found
Feb 02 14:28:33 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 02 14:28:33 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 02 14:28:33 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 02 14:28:33 localhost kernel: Zone ranges:
Feb 02 14:28:33 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 02 14:28:33 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 02 14:28:33 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 02 14:28:33 localhost kernel:   Device   empty
Feb 02 14:28:33 localhost kernel: Movable zone start for each node
Feb 02 14:28:33 localhost kernel: Early memory node ranges
Feb 02 14:28:33 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 02 14:28:33 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 02 14:28:33 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 02 14:28:33 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 02 14:28:33 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 02 14:28:33 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 02 14:28:33 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 02 14:28:33 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 02 14:28:33 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 02 14:28:33 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 02 14:28:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 02 14:28:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 02 14:28:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 02 14:28:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 02 14:28:33 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 02 14:28:33 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 02 14:28:33 localhost kernel: TSC deadline timer available
Feb 02 14:28:33 localhost kernel: CPU topo: Max. logical packages:   8
Feb 02 14:28:33 localhost kernel: CPU topo: Max. logical dies:       8
Feb 02 14:28:33 localhost kernel: CPU topo: Max. dies per package:   1
Feb 02 14:28:33 localhost kernel: CPU topo: Max. threads per core:   1
Feb 02 14:28:33 localhost kernel: CPU topo: Num. cores per package:     1
Feb 02 14:28:33 localhost kernel: CPU topo: Num. threads per package:   1
Feb 02 14:28:33 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 02 14:28:33 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 02 14:28:33 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 02 14:28:33 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 02 14:28:33 localhost kernel: Booting paravirtualized kernel on KVM
Feb 02 14:28:33 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 02 14:28:33 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 02 14:28:33 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 02 14:28:33 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 02 14:28:33 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 02 14:28:33 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 02 14:28:33 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 14:28:33 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb 02 14:28:33 localhost kernel: random: crng init done
Feb 02 14:28:33 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 02 14:28:33 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 02 14:28:33 localhost kernel: Fallback order for Node 0: 0 
Feb 02 14:28:33 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 02 14:28:33 localhost kernel: Policy zone: Normal
Feb 02 14:28:33 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 02 14:28:33 localhost kernel: software IO TLB: area num 8.
Feb 02 14:28:33 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 02 14:28:33 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Feb 02 14:28:33 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 02 14:28:33 localhost kernel: Dynamic Preempt: voluntary
Feb 02 14:28:33 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 02 14:28:33 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 02 14:28:33 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 02 14:28:33 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 02 14:28:33 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 02 14:28:33 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 02 14:28:33 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 02 14:28:33 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 02 14:28:33 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 14:28:33 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 14:28:33 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 02 14:28:33 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 02 14:28:33 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 02 14:28:33 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 02 14:28:33 localhost kernel: Console: colour VGA+ 80x25
Feb 02 14:28:33 localhost kernel: printk: console [ttyS0] enabled
Feb 02 14:28:33 localhost kernel: ACPI: Core revision 20230331
Feb 02 14:28:33 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 02 14:28:33 localhost kernel: x2apic enabled
Feb 02 14:28:33 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 02 14:28:33 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 02 14:28:33 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 02 14:28:33 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 02 14:28:33 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 02 14:28:33 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 02 14:28:33 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 02 14:28:33 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 02 14:28:33 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 02 14:28:33 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 02 14:28:33 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 02 14:28:33 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 02 14:28:33 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 02 14:28:33 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 02 14:28:33 localhost kernel: active return thunk: retbleed_return_thunk
Feb 02 14:28:33 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 02 14:28:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 02 14:28:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 02 14:28:33 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 02 14:28:33 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 02 14:28:33 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 02 14:28:33 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 02 14:28:33 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 02 14:28:33 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 02 14:28:33 localhost kernel: landlock: Up and running.
Feb 02 14:28:33 localhost kernel: Yama: becoming mindful.
Feb 02 14:28:33 localhost kernel: SELinux:  Initializing.
Feb 02 14:28:33 localhost kernel: LSM support for eBPF active
Feb 02 14:28:33 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 02 14:28:33 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 02 14:28:33 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 02 14:28:33 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 02 14:28:33 localhost kernel: ... version:                0
Feb 02 14:28:33 localhost kernel: ... bit width:              48
Feb 02 14:28:33 localhost kernel: ... generic registers:      6
Feb 02 14:28:33 localhost kernel: ... value mask:             0000ffffffffffff
Feb 02 14:28:33 localhost kernel: ... max period:             00007fffffffffff
Feb 02 14:28:33 localhost kernel: ... fixed-purpose events:   0
Feb 02 14:28:33 localhost kernel: ... event mask:             000000000000003f
Feb 02 14:28:33 localhost kernel: signal: max sigframe size: 1776
Feb 02 14:28:33 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 02 14:28:33 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 02 14:28:33 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 02 14:28:33 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 02 14:28:33 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 02 14:28:33 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 02 14:28:33 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 02 14:28:33 localhost kernel: node 0 deferred pages initialised in 10ms
Feb 02 14:28:33 localhost kernel: Memory: 7763936K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Feb 02 14:28:33 localhost kernel: devtmpfs: initialized
Feb 02 14:28:33 localhost kernel: x86/mm: Memory block size: 128MB
Feb 02 14:28:33 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 02 14:28:33 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 02 14:28:33 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 02 14:28:33 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 02 14:28:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 02 14:28:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 02 14:28:33 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 02 14:28:33 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 02 14:28:33 localhost kernel: audit: type=2000 audit(1770042513.555:1): state=initialized audit_enabled=0 res=1
Feb 02 14:28:33 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 02 14:28:33 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 02 14:28:33 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 02 14:28:33 localhost kernel: cpuidle: using governor menu
Feb 02 14:28:33 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 02 14:28:33 localhost kernel: PCI: Using configuration type 1 for base access
Feb 02 14:28:33 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 02 14:28:33 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 02 14:28:33 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 02 14:28:33 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 02 14:28:33 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 02 14:28:33 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 02 14:28:33 localhost kernel: Demotion targets for Node 0: null
Feb 02 14:28:33 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 02 14:28:33 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 02 14:28:33 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 02 14:28:33 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 02 14:28:33 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 02 14:28:33 localhost kernel: ACPI: Interpreter enabled
Feb 02 14:28:33 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 02 14:28:33 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 02 14:28:33 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 02 14:28:33 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 02 14:28:33 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 02 14:28:33 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 02 14:28:33 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [3] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [4] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [5] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [6] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [7] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [8] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [9] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [10] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [11] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [12] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [13] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [14] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [15] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [16] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [17] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [18] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [19] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [20] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [21] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [22] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [23] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [24] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [25] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [26] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [27] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [28] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [29] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [30] registered
Feb 02 14:28:33 localhost kernel: acpiphp: Slot [31] registered
Feb 02 14:28:33 localhost kernel: PCI host bridge to bus 0000:00
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 02 14:28:33 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 02 14:28:33 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 02 14:28:33 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 02 14:28:33 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 02 14:28:33 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 02 14:28:33 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 02 14:28:33 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 02 14:28:33 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 02 14:28:33 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 02 14:28:33 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 02 14:28:33 localhost kernel: iommu: Default domain type: Translated
Feb 02 14:28:33 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 02 14:28:33 localhost kernel: SCSI subsystem initialized
Feb 02 14:28:33 localhost kernel: ACPI: bus type USB registered
Feb 02 14:28:33 localhost kernel: usbcore: registered new interface driver usbfs
Feb 02 14:28:33 localhost kernel: usbcore: registered new interface driver hub
Feb 02 14:28:33 localhost kernel: usbcore: registered new device driver usb
Feb 02 14:28:33 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 02 14:28:33 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 02 14:28:33 localhost kernel: PTP clock support registered
Feb 02 14:28:33 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 02 14:28:33 localhost kernel: NetLabel: Initializing
Feb 02 14:28:33 localhost kernel: NetLabel:  domain hash size = 128
Feb 02 14:28:33 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 02 14:28:33 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 02 14:28:33 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 02 14:28:33 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 02 14:28:33 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 02 14:28:33 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 02 14:28:33 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 02 14:28:33 localhost kernel: vgaarb: loaded
Feb 02 14:28:33 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 02 14:28:33 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 02 14:28:33 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 02 14:28:33 localhost kernel: pnp: PnP ACPI init
Feb 02 14:28:33 localhost kernel: pnp 00:03: [dma 2]
Feb 02 14:28:33 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 02 14:28:33 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 02 14:28:33 localhost kernel: NET: Registered PF_INET protocol family
Feb 02 14:28:33 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 02 14:28:33 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 02 14:28:33 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 02 14:28:33 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 02 14:28:33 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 02 14:28:33 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 02 14:28:33 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 02 14:28:33 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 02 14:28:33 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 02 14:28:33 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 02 14:28:33 localhost kernel: NET: Registered PF_XDP protocol family
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 02 14:28:33 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 02 14:28:33 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 02 14:28:33 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 02 14:28:33 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24282 usecs
Feb 02 14:28:33 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 02 14:28:33 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 02 14:28:33 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 02 14:28:33 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 02 14:28:33 localhost kernel: ACPI: bus type thunderbolt registered
Feb 02 14:28:33 localhost kernel: Initialise system trusted keyrings
Feb 02 14:28:33 localhost kernel: Key type blacklist registered
Feb 02 14:28:33 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 02 14:28:33 localhost kernel: zbud: loaded
Feb 02 14:28:33 localhost kernel: integrity: Platform Keyring initialized
Feb 02 14:28:33 localhost kernel: integrity: Machine keyring initialized
Feb 02 14:28:33 localhost kernel: Freeing initrd memory: 88000K
Feb 02 14:28:33 localhost kernel: NET: Registered PF_ALG protocol family
Feb 02 14:28:33 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 02 14:28:33 localhost kernel: Key type asymmetric registered
Feb 02 14:28:33 localhost kernel: Asymmetric key parser 'x509' registered
Feb 02 14:28:33 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 02 14:28:33 localhost kernel: io scheduler mq-deadline registered
Feb 02 14:28:33 localhost kernel: io scheduler kyber registered
Feb 02 14:28:33 localhost kernel: io scheduler bfq registered
Feb 02 14:28:33 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 02 14:28:33 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 02 14:28:33 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 02 14:28:33 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 02 14:28:33 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 02 14:28:33 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 02 14:28:33 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 02 14:28:33 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 02 14:28:33 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 02 14:28:33 localhost kernel: Non-volatile memory driver v1.3
Feb 02 14:28:33 localhost kernel: rdac: device handler registered
Feb 02 14:28:33 localhost kernel: hp_sw: device handler registered
Feb 02 14:28:33 localhost kernel: emc: device handler registered
Feb 02 14:28:33 localhost kernel: alua: device handler registered
Feb 02 14:28:33 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 02 14:28:33 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 02 14:28:33 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 02 14:28:33 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 02 14:28:33 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 02 14:28:33 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 02 14:28:33 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 02 14:28:33 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb 02 14:28:33 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 02 14:28:33 localhost kernel: hub 1-0:1.0: USB hub found
Feb 02 14:28:33 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 02 14:28:33 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 02 14:28:33 localhost kernel: usbserial: USB Serial support registered for generic
Feb 02 14:28:33 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 02 14:28:33 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 02 14:28:33 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 02 14:28:33 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 02 14:28:33 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 02 14:28:33 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 02 14:28:33 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-02T14:28:33 UTC (1770042513)
Feb 02 14:28:33 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 02 14:28:33 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 02 14:28:33 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 02 14:28:33 localhost kernel: usbcore: registered new interface driver usbhid
Feb 02 14:28:33 localhost kernel: usbhid: USB HID core driver
Feb 02 14:28:33 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 02 14:28:33 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 02 14:28:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 02 14:28:33 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 02 14:28:33 localhost kernel: Initializing XFRM netlink socket
Feb 02 14:28:33 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 02 14:28:33 localhost kernel: Segment Routing with IPv6
Feb 02 14:28:33 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 02 14:28:33 localhost kernel: mpls_gso: MPLS GSO support
Feb 02 14:28:33 localhost kernel: IPI shorthand broadcast: enabled
Feb 02 14:28:33 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 02 14:28:33 localhost kernel: AES CTR mode by8 optimization enabled
Feb 02 14:28:33 localhost kernel: sched_clock: Marking stable (889002460, 146214474)->(1131467198, -96250264)
Feb 02 14:28:33 localhost kernel: registered taskstats version 1
Feb 02 14:28:33 localhost kernel: Loading compiled-in X.509 certificates
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 02 14:28:33 localhost kernel: Demotion targets for Node 0: null
Feb 02 14:28:33 localhost kernel: page_owner is disabled
Feb 02 14:28:33 localhost kernel: Key type .fscrypt registered
Feb 02 14:28:33 localhost kernel: Key type fscrypt-provisioning registered
Feb 02 14:28:33 localhost kernel: Key type big_key registered
Feb 02 14:28:33 localhost kernel: Key type encrypted registered
Feb 02 14:28:33 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 02 14:28:33 localhost kernel: Loading compiled-in module X.509 certificates
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 02 14:28:33 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 02 14:28:33 localhost kernel: ima: No architecture policies found
Feb 02 14:28:33 localhost kernel: evm: Initialising EVM extended attributes:
Feb 02 14:28:33 localhost kernel: evm: security.selinux
Feb 02 14:28:33 localhost kernel: evm: security.SMACK64 (disabled)
Feb 02 14:28:33 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 02 14:28:33 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 02 14:28:33 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 02 14:28:33 localhost kernel: evm: security.apparmor (disabled)
Feb 02 14:28:33 localhost kernel: evm: security.ima
Feb 02 14:28:33 localhost kernel: evm: security.capability
Feb 02 14:28:33 localhost kernel: evm: HMAC attrs: 0x1
Feb 02 14:28:33 localhost kernel: Running certificate verification RSA selftest
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 02 14:28:33 localhost kernel: Running certificate verification ECDSA selftest
Feb 02 14:28:33 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 02 14:28:33 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 02 14:28:33 localhost kernel: clk: Disabling unused clocks
Feb 02 14:28:33 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 02 14:28:33 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb 02 14:28:33 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 02 14:28:33 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb 02 14:28:33 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 02 14:28:33 localhost kernel: Run /init as init process
Feb 02 14:28:33 localhost kernel:   with arguments:
Feb 02 14:28:33 localhost kernel:     /init
Feb 02 14:28:33 localhost kernel:   with environment:
Feb 02 14:28:33 localhost kernel:     HOME=/
Feb 02 14:28:33 localhost kernel:     TERM=linux
Feb 02 14:28:33 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Feb 02 14:28:33 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 02 14:28:33 localhost systemd[1]: Detected virtualization kvm.
Feb 02 14:28:33 localhost systemd[1]: Detected architecture x86-64.
Feb 02 14:28:33 localhost systemd[1]: Running in initrd.
Feb 02 14:28:33 localhost systemd[1]: No hostname configured, using default hostname.
Feb 02 14:28:33 localhost systemd[1]: Hostname set to <localhost>.
Feb 02 14:28:33 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 02 14:28:33 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 02 14:28:33 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 02 14:28:33 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 02 14:28:33 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 02 14:28:33 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 02 14:28:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 02 14:28:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 02 14:28:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 02 14:28:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 02 14:28:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 02 14:28:34 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 02 14:28:34 localhost systemd[1]: Reached target Local File Systems.
Feb 02 14:28:34 localhost systemd[1]: Reached target Path Units.
Feb 02 14:28:34 localhost systemd[1]: Reached target Slice Units.
Feb 02 14:28:34 localhost systemd[1]: Reached target Swaps.
Feb 02 14:28:34 localhost systemd[1]: Reached target Timer Units.
Feb 02 14:28:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 02 14:28:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 02 14:28:34 localhost systemd[1]: Listening on Journal Socket.
Feb 02 14:28:34 localhost systemd[1]: Listening on udev Control Socket.
Feb 02 14:28:34 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 02 14:28:34 localhost systemd[1]: Reached target Socket Units.
Feb 02 14:28:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 02 14:28:34 localhost systemd[1]: Starting Journal Service...
Feb 02 14:28:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 02 14:28:34 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 02 14:28:34 localhost systemd[1]: Starting Create System Users...
Feb 02 14:28:34 localhost systemd[1]: Starting Setup Virtual Console...
Feb 02 14:28:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 02 14:28:34 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 02 14:28:34 localhost systemd-journald[304]: Journal started
Feb 02 14:28:34 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/91f8129188304d3aad9af49b9247697f) is 8.0M, max 153.6M, 145.6M free.
Feb 02 14:28:34 localhost systemd[1]: Started Journal Service.
Feb 02 14:28:34 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Feb 02 14:28:34 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Feb 02 14:28:34 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 02 14:28:34 localhost systemd[1]: Finished Create System Users.
Feb 02 14:28:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 02 14:28:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 02 14:28:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 02 14:28:34 localhost systemd[1]: Finished Setup Virtual Console.
Feb 02 14:28:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 02 14:28:34 localhost systemd[1]: Starting dracut cmdline hook...
Feb 02 14:28:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 02 14:28:34 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Feb 02 14:28:34 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 02 14:28:34 localhost systemd[1]: Finished dracut cmdline hook.
Feb 02 14:28:34 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 02 14:28:34 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 02 14:28:34 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 02 14:28:34 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 02 14:28:34 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 02 14:28:34 localhost kernel: RPC: Registered udp transport module.
Feb 02 14:28:34 localhost kernel: RPC: Registered tcp transport module.
Feb 02 14:28:34 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 02 14:28:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 02 14:28:34 localhost rpc.statd[440]: Version 2.5.4 starting
Feb 02 14:28:34 localhost rpc.statd[440]: Initializing NSM state
Feb 02 14:28:34 localhost rpc.idmapd[445]: Setting log level to 0
Feb 02 14:28:34 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 02 14:28:34 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 02 14:28:34 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Feb 02 14:28:34 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 02 14:28:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 02 14:28:34 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 02 14:28:34 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 02 14:28:34 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 02 14:28:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 14:28:34 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 02 14:28:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 14:28:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 14:28:34 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 02 14:28:34 localhost systemd[1]: Reached target Network.
Feb 02 14:28:34 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 02 14:28:34 localhost systemd[1]: Starting dracut initqueue hook...
Feb 02 14:28:34 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 02 14:28:34 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 02 14:28:34 localhost kernel:  vda: vda1
Feb 02 14:28:34 localhost kernel: libata version 3.00 loaded.
Feb 02 14:28:34 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 02 14:28:34 localhost kernel: scsi host0: ata_piix
Feb 02 14:28:34 localhost kernel: scsi host1: ata_piix
Feb 02 14:28:34 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 02 14:28:34 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 02 14:28:34 localhost systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 14:28:34 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 02 14:28:34 localhost systemd[1]: Reached target Initrd Root Device.
Feb 02 14:28:34 localhost kernel: ata1: found unknown device (class 0)
Feb 02 14:28:34 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 02 14:28:34 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 02 14:28:34 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 02 14:28:34 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 02 14:28:34 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 02 14:28:34 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 02 14:28:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 02 14:28:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 02 14:28:35 localhost systemd[1]: Reached target System Initialization.
Feb 02 14:28:35 localhost systemd[1]: Reached target Basic System.
Feb 02 14:28:35 localhost systemd[1]: Finished dracut initqueue hook.
Feb 02 14:28:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 02 14:28:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 02 14:28:35 localhost systemd[1]: Reached target Remote File Systems.
Feb 02 14:28:35 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 02 14:28:35 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 02 14:28:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb 02 14:28:35 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Feb 02 14:28:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 02 14:28:35 localhost systemd[1]: Mounting /sysroot...
Feb 02 14:28:35 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 02 14:28:35 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb 02 14:28:35 localhost kernel: XFS (vda1): Ending clean mount
Feb 02 14:28:35 localhost systemd[1]: Mounted /sysroot.
Feb 02 14:28:35 localhost systemd[1]: Reached target Initrd Root File System.
Feb 02 14:28:35 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 02 14:28:35 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 02 14:28:35 localhost systemd[1]: Reached target Initrd File Systems.
Feb 02 14:28:35 localhost systemd[1]: Reached target Initrd Default Target.
Feb 02 14:28:35 localhost systemd[1]: Starting dracut mount hook...
Feb 02 14:28:35 localhost systemd[1]: Finished dracut mount hook.
Feb 02 14:28:35 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 02 14:28:35 localhost rpc.idmapd[445]: exiting on signal 15
Feb 02 14:28:35 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 02 14:28:35 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 02 14:28:35 localhost systemd[1]: Stopped target Network.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Timer Units.
Feb 02 14:28:35 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 02 14:28:35 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Basic System.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Path Units.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Remote File Systems.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Slice Units.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Socket Units.
Feb 02 14:28:35 localhost systemd[1]: Stopped target System Initialization.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Local File Systems.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Swaps.
Feb 02 14:28:35 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut mount hook.
Feb 02 14:28:35 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 02 14:28:35 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 02 14:28:35 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 02 14:28:35 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 02 14:28:35 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 02 14:28:35 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 02 14:28:35 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 02 14:28:35 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 02 14:28:35 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 02 14:28:35 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 02 14:28:35 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 02 14:28:35 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 02 14:28:35 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Closed udev Control Socket.
Feb 02 14:28:35 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Closed udev Kernel Socket.
Feb 02 14:28:35 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 02 14:28:35 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 02 14:28:35 localhost systemd[1]: Starting Cleanup udev Database...
Feb 02 14:28:35 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 02 14:28:35 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 02 14:28:35 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Stopped Create System Users.
Feb 02 14:28:35 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Finished Cleanup udev Database.
Feb 02 14:28:35 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 02 14:28:35 localhost systemd[1]: Reached target Switch Root.
Feb 02 14:28:35 localhost systemd[1]: Starting Switch Root...
Feb 02 14:28:36 localhost systemd[1]: Switching root.
Feb 02 14:28:36 localhost systemd-journald[304]: Journal stopped
Feb 02 14:28:37 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Feb 02 14:28:37 localhost kernel: audit: type=1404 audit(1770042516.192:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability open_perms=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 14:28:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 14:28:37 localhost kernel: audit: type=1403 audit(1770042516.327:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 02 14:28:37 localhost systemd[1]: Successfully loaded SELinux policy in 139.612ms.
Feb 02 14:28:37 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 40.504ms.
Feb 02 14:28:37 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 02 14:28:37 localhost systemd[1]: Detected virtualization kvm.
Feb 02 14:28:37 localhost systemd[1]: Detected architecture x86-64.
Feb 02 14:28:37 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 14:28:37 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Stopped Switch Root.
Feb 02 14:28:37 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 02 14:28:37 localhost systemd[1]: Created slice Slice /system/getty.
Feb 02 14:28:37 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 02 14:28:37 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 02 14:28:37 localhost systemd[1]: Created slice User and Session Slice.
Feb 02 14:28:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 02 14:28:37 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 02 14:28:37 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 02 14:28:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 02 14:28:37 localhost systemd[1]: Stopped target Switch Root.
Feb 02 14:28:37 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 02 14:28:37 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 02 14:28:37 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 02 14:28:37 localhost systemd[1]: Reached target Path Units.
Feb 02 14:28:37 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 02 14:28:37 localhost systemd[1]: Reached target Slice Units.
Feb 02 14:28:37 localhost systemd[1]: Reached target Swaps.
Feb 02 14:28:37 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 02 14:28:37 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 02 14:28:37 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 02 14:28:37 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 02 14:28:37 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 02 14:28:37 localhost systemd[1]: Listening on udev Control Socket.
Feb 02 14:28:37 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 02 14:28:37 localhost systemd[1]: Mounting Huge Pages File System...
Feb 02 14:28:37 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 02 14:28:37 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 02 14:28:37 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 02 14:28:37 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 02 14:28:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 02 14:28:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 14:28:37 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 02 14:28:37 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 02 14:28:37 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 02 14:28:37 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 02 14:28:37 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 02 14:28:37 localhost systemd[1]: Stopped Journal Service.
Feb 02 14:28:37 localhost systemd[1]: Starting Journal Service...
Feb 02 14:28:37 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 02 14:28:37 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 02 14:28:37 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 14:28:37 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 02 14:28:37 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 02 14:28:37 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 02 14:28:37 localhost kernel: fuse: init (API version 7.37)
Feb 02 14:28:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 02 14:28:37 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 02 14:28:37 localhost systemd[1]: Mounted Huge Pages File System.
Feb 02 14:28:37 localhost systemd-journald[678]: Journal started
Feb 02 14:28:37 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 02 14:28:36 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 02 14:28:36 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Started Journal Service.
Feb 02 14:28:37 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 02 14:28:37 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 02 14:28:37 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 02 14:28:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 02 14:28:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 14:28:37 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 02 14:28:37 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 02 14:28:37 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 02 14:28:37 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 02 14:28:37 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 02 14:28:37 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 02 14:28:37 localhost kernel: ACPI: bus type drm_connector registered
Feb 02 14:28:37 localhost systemd[1]: Mounting FUSE Control File System...
Feb 02 14:28:37 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 02 14:28:37 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 02 14:28:37 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 02 14:28:37 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 02 14:28:37 localhost systemd[1]: Starting Load/Save OS Random Seed...
Feb 02 14:28:37 localhost systemd[1]: Starting Create System Users...
Feb 02 14:28:37 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 02 14:28:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 02 14:28:37 localhost systemd[1]: Mounted FUSE Control File System.
Feb 02 14:28:37 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 02 14:28:37 localhost systemd-journald[678]: Received client request to flush runtime journal.
Feb 02 14:28:37 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 02 14:28:37 localhost systemd[1]: Finished Load/Save OS Random Seed.
Feb 02 14:28:37 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 02 14:28:37 localhost systemd[1]: Finished Create System Users.
Feb 02 14:28:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 02 14:28:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 02 14:28:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 02 14:28:37 localhost systemd[1]: Reached target Local File Systems.
Feb 02 14:28:37 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 02 14:28:37 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 02 14:28:37 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 02 14:28:37 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 02 14:28:37 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 02 14:28:37 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 02 14:28:37 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 02 14:28:37 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Feb 02 14:28:37 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 02 14:28:37 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 02 14:28:37 localhost systemd[1]: Starting Security Auditing Service...
Feb 02 14:28:37 localhost systemd[1]: Starting RPC Bind...
Feb 02 14:28:37 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 02 14:28:37 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 02 14:28:37 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 02 14:28:37 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 02 14:28:37 localhost systemd[1]: Started RPC Bind.
Feb 02 14:28:37 localhost augenrules[705]: /sbin/augenrules: No change
Feb 02 14:28:37 localhost augenrules[720]: No rules
Feb 02 14:28:37 localhost augenrules[720]: enabled 1
Feb 02 14:28:37 localhost augenrules[720]: failure 1
Feb 02 14:28:37 localhost augenrules[720]: pid 700
Feb 02 14:28:37 localhost augenrules[720]: rate_limit 0
Feb 02 14:28:37 localhost augenrules[720]: backlog_limit 8192
Feb 02 14:28:37 localhost augenrules[720]: lost 0
Feb 02 14:28:37 localhost augenrules[720]: backlog 4
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time 60000
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time_actual 0
Feb 02 14:28:37 localhost augenrules[720]: enabled 1
Feb 02 14:28:37 localhost augenrules[720]: failure 1
Feb 02 14:28:37 localhost augenrules[720]: pid 700
Feb 02 14:28:37 localhost augenrules[720]: rate_limit 0
Feb 02 14:28:37 localhost augenrules[720]: backlog_limit 8192
Feb 02 14:28:37 localhost augenrules[720]: lost 0
Feb 02 14:28:37 localhost augenrules[720]: backlog 2
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time 60000
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time_actual 0
Feb 02 14:28:37 localhost augenrules[720]: enabled 1
Feb 02 14:28:37 localhost augenrules[720]: failure 1
Feb 02 14:28:37 localhost augenrules[720]: pid 700
Feb 02 14:28:37 localhost augenrules[720]: rate_limit 0
Feb 02 14:28:37 localhost augenrules[720]: backlog_limit 8192
Feb 02 14:28:37 localhost augenrules[720]: lost 0
Feb 02 14:28:37 localhost augenrules[720]: backlog 1
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time 60000
Feb 02 14:28:37 localhost augenrules[720]: backlog_wait_time_actual 0
Feb 02 14:28:37 localhost systemd[1]: Started Security Auditing Service.
Feb 02 14:28:37 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 02 14:28:37 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 02 14:28:37 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 02 14:28:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 02 14:28:37 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Feb 02 14:28:37 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 02 14:28:37 localhost systemd[1]: Starting Update is Completed...
Feb 02 14:28:37 localhost systemd[1]: Finished Update is Completed.
Feb 02 14:28:37 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 02 14:28:37 localhost systemd[1]: Reached target System Initialization.
Feb 02 14:28:37 localhost systemd[1]: Started dnf makecache --timer.
Feb 02 14:28:37 localhost systemd[1]: Started Daily rotation of log files.
Feb 02 14:28:37 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 02 14:28:37 localhost systemd[1]: Reached target Timer Units.
Feb 02 14:28:37 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 02 14:28:37 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 02 14:28:37 localhost systemd[1]: Reached target Socket Units.
Feb 02 14:28:37 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 02 14:28:37 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 14:28:37 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 02 14:28:37 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 14:28:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 02 14:28:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 02 14:28:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 02 14:28:37 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 02 14:28:37 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 02 14:28:37 localhost systemd[1]: Reached target Basic System.
Feb 02 14:28:37 localhost dbus-broker-lau[762]: Ready
Feb 02 14:28:37 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 02 14:28:37 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 02 14:28:37 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 02 14:28:37 localhost systemd[1]: Starting NTP client/server...
Feb 02 14:28:37 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 02 14:28:37 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 02 14:28:37 localhost systemd[1]: Starting IPv4 firewall with iptables...
Feb 02 14:28:37 localhost systemd[1]: Started irqbalance daemon.
Feb 02 14:28:37 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 02 14:28:37 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 14:28:37 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 14:28:37 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 14:28:37 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 02 14:28:37 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 02 14:28:37 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 02 14:28:37 localhost systemd[1]: Starting User Login Management...
Feb 02 14:28:37 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 02 14:28:37 localhost chronyd[790]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 02 14:28:37 localhost chronyd[790]: Loaded 0 symmetric keys
Feb 02 14:28:37 localhost chronyd[790]: Using right/UTC timezone to obtain leap second data
Feb 02 14:28:37 localhost chronyd[790]: Loaded seccomp filter (level 2)
Feb 02 14:28:37 localhost systemd[1]: Started NTP client/server.
Feb 02 14:28:37 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 02 14:28:37 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 02 14:28:38 localhost systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 02 14:28:38 localhost systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 02 14:28:38 localhost kernel: kvm_amd: TSC scaling supported
Feb 02 14:28:38 localhost kernel: kvm_amd: Nested Virtualization enabled
Feb 02 14:28:38 localhost kernel: kvm_amd: Nested Paging enabled
Feb 02 14:28:38 localhost kernel: kvm_amd: LBR virtualization supported
Feb 02 14:28:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 02 14:28:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 02 14:28:38 localhost kernel: Console: switching to colour dummy device 80x25
Feb 02 14:28:38 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 02 14:28:38 localhost kernel: [drm] features: -context_init
Feb 02 14:28:38 localhost kernel: [drm] number of scanouts: 1
Feb 02 14:28:38 localhost kernel: [drm] number of cap sets: 0
Feb 02 14:28:38 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 02 14:28:38 localhost systemd-logind[786]: New seat seat0.
Feb 02 14:28:38 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 02 14:28:38 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 02 14:28:38 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 02 14:28:38 localhost systemd[1]: Started User Login Management.
Feb 02 14:28:38 localhost iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Feb 02 14:28:38 localhost systemd[1]: Finished IPv4 firewall with iptables.
Feb 02 14:28:38 localhost cloud-init[837]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 02 Feb 2026 14:28:38 +0000. Up 5.92 seconds.
Feb 02 14:28:38 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 02 14:28:38 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 02 14:28:38 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpole8j5v5.mount: Deactivated successfully.
Feb 02 14:28:38 localhost systemd[1]: Starting Hostname Service...
Feb 02 14:28:38 localhost systemd[1]: Started Hostname Service.
Feb 02 14:28:38 np0005605268.novalocal systemd-hostnamed[851]: Hostname set to <np0005605268.novalocal> (static)
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Reached target Preparation for Network.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Starting Network Manager...
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1300] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3da1c12d-3f65-4f20-960d-600dea66a7e3)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1318] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1482] manager[0x55fca9a55000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1550] hostname: hostname: using hostnamed
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1551] hostname: static hostname changed from (none) to "np0005605268.novalocal"
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1557] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1716] manager[0x55fca9a55000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1719] manager[0x55fca9a55000]: rfkill: WWAN hardware radio set enabled
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1808] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1808] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1809] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1809] manager: Networking is enabled by state file
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1811] settings: Loaded settings plugin: keyfile (internal)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1848] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1870] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1887] dhcp: init: Using DHCP client 'internal'
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1891] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1904] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1916] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1926] device (lo): Activation: starting connection 'lo' (04c2cb28-6382-41c1-9610-496161f13eea)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1935] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1939] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1963] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1967] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1969] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1971] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1973] device (eth0): carrier: link connected
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1975] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1981] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1987] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1991] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1991] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1995] manager: NetworkManager state is now CONNECTING
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.1996] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.2004] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.2006] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Started Network Manager.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Reached target Network.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.2237] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.2241] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 14:28:39 np0005605268.novalocal NetworkManager[855]: <info>  [1770042519.2248] device (lo): Activation: successful, device activated.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Reached target NFS client services.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: Reached target Remote File Systems.
Feb 02 14:28:39 np0005605268.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2745] dhcp4 (eth0): state changed new lease, address=38.129.56.16
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2759] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2784] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2805] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2807] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2810] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2813] device (eth0): Activation: successful, device activated.
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2817] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 02 14:28:40 np0005605268.novalocal NetworkManager[855]: <info>  [1770042520.2820] manager: startup complete
Feb 02 14:28:40 np0005605268.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 02 14:28:40 np0005605268.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 02 Feb 2026 14:28:40 +0000. Up 7.98 seconds.
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |  eth0  | True |         38.129.56.16         | 255.255.255.0 | global | fa:16:3e:30:1e:5d |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe30:1e5d/64 |       .       |  link  | fa:16:3e:30:1e:5d |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 02 14:28:40 np0005605268.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Feb 02 14:28:41 np0005605268.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Generating public/private rsa key pair.
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key fingerprint is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: SHA256:+iVk0WTJuAwheojtwKwcIZX6ZNHrS3xocm5B56azktU root@np0005605268.novalocal
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key's randomart image is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +---[RSA 3072]----+
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.o.o. .. oo.     |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |+o+o... .+o      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |o=+... o...      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |+o+.o . o.       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.=.+ =  S        |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |  o X E+         |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |   O *. . .      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |  o *  . o       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |   o.o  .        |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +----[SHA256]-----+
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Generating public/private ecdsa key pair.
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key fingerprint is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: SHA256:ekMfDqbFUKSJFlw+9lGfpySwKm2K6QjE1wFZyqINes8 root@np0005605268.novalocal
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key's randomart image is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +---[ECDSA 256]---+
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |   o+o..+ .      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |  ..++ + + . .   |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.. oo.B o . + .  |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |+o...o.* . o o   |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |ooo...+ S . .    |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |...= + * + .     |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |. o E o o o      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.o     . .       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |. .              |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +----[SHA256]-----+
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Generating public/private ed25519 key pair.
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key fingerprint is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: SHA256:KlUz/qAv9um+MTK7e62JZrpjN5AbSBrZGgOai+MSj/E root@np0005605268.novalocal
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: The key's randomart image is:
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +--[ED25519 256]--+
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |                 |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.                |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |o+      +        |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |B o    o o       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.O . .. S        |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |O . +. o o       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.B  .+= o..      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |o.E +.B*.=.      |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: |.  .oO=XXo       |
Feb 02 14:28:42 np0005605268.novalocal cloud-init[919]: +----[SHA256]-----+
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Reached target Network is Online.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting System Logging Service...
Feb 02 14:28:42 np0005605268.novalocal sm-notify[1003]: Version 2.5.4 starting
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Permit User Sessions...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Finished Permit User Sessions.
Feb 02 14:28:42 np0005605268.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Feb 02 14:28:42 np0005605268.novalocal sshd[1005]: Server listening on :: port 22.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started Command Scheduler.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started Getty on tty1.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Reached target Login Prompts.
Feb 02 14:28:42 np0005605268.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Feb 02 14:28:42 np0005605268.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 02 14:28:42 np0005605268.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 38% if used.)
Feb 02 14:28:42 np0005605268.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Started System Logging Service.
Feb 02 14:28:42 np0005605268.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Feb 02 14:28:42 np0005605268.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Reached target Multi-User System.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 02 14:28:42 np0005605268.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 14:28:42 np0005605268.novalocal kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Feb 02 14:28:42 np0005605268.novalocal kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1099]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 02 Feb 2026 14:28:42 +0000. Up 9.74 seconds.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Feb 02 14:28:42 np0005605268.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Feb 02 14:28:42 np0005605268.novalocal dracut[1265]: dracut-057-102.git20250818.el9
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1282]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 02 Feb 2026 14:28:42 +0000. Up 10.12 seconds.
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1287]: Unable to negotiate with 38.102.83.114 port 44648: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 02 14:28:42 np0005605268.novalocal dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1299]: #############################################################
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1304]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1295]: Connection reset by 38.102.83.114 port 44664 [preauth]
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1309]: 256 SHA256:ekMfDqbFUKSJFlw+9lGfpySwKm2K6QjE1wFZyqINes8 root@np0005605268.novalocal (ECDSA)
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1314]: 256 SHA256:KlUz/qAv9um+MTK7e62JZrpjN5AbSBrZGgOai+MSj/E root@np0005605268.novalocal (ED25519)
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1311]: Unable to negotiate with 38.102.83.114 port 44666: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1321]: 3072 SHA256:+iVk0WTJuAwheojtwKwcIZX6ZNHrS3xocm5B56azktU root@np0005605268.novalocal (RSA)
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1323]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1325]: #############################################################
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1322]: Unable to negotiate with 38.102.83.114 port 44678: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1283]: Connection closed by 38.102.83.114 port 44640 [preauth]
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1342]: Connection reset by 38.102.83.114 port 44710 [preauth]
Feb 02 14:28:42 np0005605268.novalocal cloud-init[1282]: Cloud-init v. 24.4-8.el9 finished at Mon, 02 Feb 2026 14:28:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.29 seconds
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1349]: Unable to negotiate with 38.102.83.114 port 44714: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 02 14:28:42 np0005605268.novalocal sshd-session[1364]: Unable to negotiate with 38.102.83.114 port 44728: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 02 14:28:43 np0005605268.novalocal sshd-session[1337]: Connection closed by 38.102.83.114 port 44694 [preauth]
Feb 02 14:28:43 np0005605268.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Feb 02 14:28:43 np0005605268.novalocal systemd[1]: Reached target Cloud-init target.
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: Module 'resume' will not be installed, because it's in the list to be omitted!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: memstrack is not available
Feb 02 14:28:43 np0005605268.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: memstrack is not available
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: *** Including module: systemd ***
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: *** Including module: fips ***
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: *** Including module: systemd-initrd ***
Feb 02 14:28:44 np0005605268.novalocal dracut[1268]: *** Including module: i18n ***
Feb 02 14:28:45 np0005605268.novalocal dracut[1268]: *** Including module: drm ***
Feb 02 14:28:45 np0005605268.novalocal dracut[1268]: *** Including module: prefixdevname ***
Feb 02 14:28:45 np0005605268.novalocal dracut[1268]: *** Including module: kernel-modules ***
Feb 02 14:28:45 np0005605268.novalocal kernel: block vda: the capability attribute has been deprecated.
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: kernel-modules-extra ***
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: qemu ***
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: fstab-sys ***
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: rootfs-block ***
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: terminfo ***
Feb 02 14:28:46 np0005605268.novalocal dracut[1268]: *** Including module: udev-rules ***
Feb 02 14:28:46 np0005605268.novalocal chronyd[790]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Feb 02 14:28:46 np0005605268.novalocal chronyd[790]: System clock wrong by 1.385418 seconds
Feb 02 14:28:47 np0005605268.novalocal chronyd[790]: System clock was stepped by 1.385418 seconds
Feb 02 14:28:47 np0005605268.novalocal chronyd[790]: System clock TAI offset set to 37 seconds
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: Skipping udev rule: 91-permissions.rules
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: virtiofs ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: dracut-systemd ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: usrmount ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: base ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: fs-lib ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: kdumpbase ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]:   microcode_ctl module: mangling fw_dir
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel" is ignored
Feb 02 14:28:48 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]: *** Including module: openssl ***
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]: *** Including module: shutdown ***
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]: *** Including module: squash ***
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]: *** Including modules done ***
Feb 02 14:28:49 np0005605268.novalocal dracut[1268]: *** Installing kernel module dependencies ***
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 25 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 31 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 28 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 32 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 30 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 02 14:28:50 np0005605268.novalocal irqbalance[783]: IRQ 29 affinity is now unmanaged
Feb 02 14:28:50 np0005605268.novalocal dracut[1268]: *** Installing kernel module dependencies done ***
Feb 02 14:28:50 np0005605268.novalocal dracut[1268]: *** Resolving executable dependencies ***
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: *** Resolving executable dependencies done ***
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: *** Generating early-microcode cpio image ***
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: *** Store current command line parameters ***
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: Stored kernel commandline:
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Feb 02 14:28:51 np0005605268.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 14:28:51 np0005605268.novalocal dracut[1268]: *** Install squash loader ***
Feb 02 14:28:52 np0005605268.novalocal dracut[1268]: *** Squashing the files inside the initramfs ***
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: *** Squashing the files inside the initramfs done ***
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: *** Hardlinking files ***
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Mode:           real
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Files:          50
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Linked:         0 files
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Compared:       0 xattrs
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Compared:       0 files
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Saved:          0 B
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: Duration:       0.000456 seconds
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: *** Hardlinking files done ***
Feb 02 14:28:53 np0005605268.novalocal dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb 02 14:28:54 np0005605268.novalocal kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Feb 02 14:28:54 np0005605268.novalocal kdumpctl[1017]: kdump: Starting kdump: [OK]
Feb 02 14:28:54 np0005605268.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 02 14:28:54 np0005605268.novalocal systemd[1]: Startup finished in 1.152s (kernel) + 2.381s (initrd) + 16.837s (userspace) = 20.371s.
Feb 02 14:29:06 np0005605268.novalocal sshd-session[4300]: Accepted publickey for zuul from 38.102.83.114 port 60104 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 02 14:29:06 np0005605268.novalocal systemd-logind[786]: New session 1 of user zuul.
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Queued start job for default target Main User Target.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Created slice User Application Slice.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Reached target Paths.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Reached target Timers.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Starting D-Bus User Message Bus Socket...
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Starting Create User's Volatile Files and Directories...
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Listening on D-Bus User Message Bus Socket.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Reached target Sockets.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Finished Create User's Volatile Files and Directories.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Reached target Basic System.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Reached target Main User Target.
Feb 02 14:29:06 np0005605268.novalocal systemd[4304]: Startup finished in 144ms.
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 02 14:29:06 np0005605268.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 02 14:29:06 np0005605268.novalocal sshd-session[4300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:29:06 np0005605268.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:29:09 np0005605268.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:29:10 np0005605268.novalocal irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Feb 02 14:29:10 np0005605268.novalocal irqbalance[783]: IRQ 27 affinity is now unmanaged
Feb 02 14:29:10 np0005605268.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 14:29:16 np0005605268.novalocal python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:29:17 np0005605268.novalocal python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 02 14:29:19 np0005605268.novalocal python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpUGBdZbcTcNlEWitdTzjCzkKnO25KG5DXP9tUIQMgjMZ5sVmr5Qgi7HK8I6UHgXXKl5ipypltRl4o/rDG302R3v8iaY6KZvgS0L05QuJzUv9Jnl3oLxKYgvlOF4TDWM+P7sTNw/DwxNTnIqgOWq6ZZ1VU+d4SOoxyXGYDmxd3I1iXZWfnvQ5SBmasYM/pY9Wsj1ru0vAjV7u5l0elZMufd889Rqq/NRRBnWa2oWnwdGUCuyCtuYwSeu/0Y30slOuwBP7lchDWwabZ1S1hyEJTptIQzdbyFk6eNdJ1JAwmriI/eiItEMZBLkfVupCnEw3W5B0AaTBXiTZYugNsxnM2+w29sDemNbwdsn9t76XxsnOMFyxlxJC+fE83YxoS+/4ukYkk8p61b0zAXB9PubU713s9Xa4oyLfwYmyPHH2JT1maPSTMlPZY9/ZSE3irHGFr5dLeR66CFvp4t4QEgN4kk0fme+1vOqAFYtXxhx7wXo6oQcHZmKbtm1F4eYpBvZ8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:20 np0005605268.novalocal python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:20 np0005605268.novalocal python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:20 np0005605268.novalocal python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770042560.252093-207-232671555831032/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=084b79dde0e14462adea15fb24540344_id_rsa follow=False checksum=6a5852a50e2939672d51c7681d450ff93d48ce63 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:21 np0005605268.novalocal python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:21 np0005605268.novalocal python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770042561.2597926-240-199919860637338/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=084b79dde0e14462adea15fb24540344_id_rsa.pub follow=False checksum=5b058542d4aa926515a505c7342414bb1a322b68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:23 np0005605268.novalocal python3[4976]: ansible-ping Invoked with data=pong
Feb 02 14:29:24 np0005605268.novalocal python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:29:26 np0005605268.novalocal python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 02 14:29:27 np0005605268.novalocal python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:27 np0005605268.novalocal python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:27 np0005605268.novalocal python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:28 np0005605268.novalocal python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:28 np0005605268.novalocal python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:28 np0005605268.novalocal python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:30 np0005605268.novalocal sudo[5234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feeupekqacofedpdgtkdbiyffcigfqmu ; /usr/bin/python3'
Feb 02 14:29:30 np0005605268.novalocal sudo[5234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:30 np0005605268.novalocal python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:30 np0005605268.novalocal sudo[5234]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:30 np0005605268.novalocal sudo[5312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xldyhjrocmeejidmqcjduzqdeprajzlk ; /usr/bin/python3'
Feb 02 14:29:30 np0005605268.novalocal sudo[5312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:31 np0005605268.novalocal python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:31 np0005605268.novalocal sudo[5312]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:31 np0005605268.novalocal sudo[5385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvwvgrwvhujutyqvmnswxvvgnlgfejtw ; /usr/bin/python3'
Feb 02 14:29:31 np0005605268.novalocal sudo[5385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:31 np0005605268.novalocal python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1770042570.7566228-21-259017093461005/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:31 np0005605268.novalocal sudo[5385]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:32 np0005605268.novalocal python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:32 np0005605268.novalocal python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:32 np0005605268.novalocal python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:33 np0005605268.novalocal python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:33 np0005605268.novalocal python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:33 np0005605268.novalocal python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:33 np0005605268.novalocal python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:34 np0005605268.novalocal python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:34 np0005605268.novalocal python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:34 np0005605268.novalocal python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:34 np0005605268.novalocal python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:35 np0005605268.novalocal python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:35 np0005605268.novalocal python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:35 np0005605268.novalocal python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:35 np0005605268.novalocal python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:36 np0005605268.novalocal python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:36 np0005605268.novalocal python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:36 np0005605268.novalocal python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:36 np0005605268.novalocal python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:37 np0005605268.novalocal python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:37 np0005605268.novalocal python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:37 np0005605268.novalocal python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:37 np0005605268.novalocal python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:38 np0005605268.novalocal python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:38 np0005605268.novalocal python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:38 np0005605268.novalocal python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:29:41 np0005605268.novalocal sudo[6059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzloswziunsrwyupwoaizwfnxrzvziyb ; /usr/bin/python3'
Feb 02 14:29:41 np0005605268.novalocal sudo[6059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:42 np0005605268.novalocal python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 14:29:42 np0005605268.novalocal systemd[1]: Starting Time & Date Service...
Feb 02 14:29:42 np0005605268.novalocal systemd[1]: Started Time & Date Service.
Feb 02 14:29:42 np0005605268.novalocal systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Feb 02 14:29:42 np0005605268.novalocal sudo[6059]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:42 np0005605268.novalocal sudo[6090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqntwxiqaffzguhteqerjknwnppydatf ; /usr/bin/python3'
Feb 02 14:29:42 np0005605268.novalocal sudo[6090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:42 np0005605268.novalocal python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:42 np0005605268.novalocal sudo[6090]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:43 np0005605268.novalocal python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:43 np0005605268.novalocal python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1770042582.8753185-153-267176090729934/source _original_basename=tmpy4c7zqg9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:43 np0005605268.novalocal python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:44 np0005605268.novalocal python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770042583.7548566-183-136433512459859/source _original_basename=tmpvaqaafxz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:44 np0005605268.novalocal sudo[6510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobueuoxxpclnofjjtbymwataocajkid ; /usr/bin/python3'
Feb 02 14:29:44 np0005605268.novalocal sudo[6510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:45 np0005605268.novalocal python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:45 np0005605268.novalocal sudo[6510]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:45 np0005605268.novalocal sudo[6583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dylinzutjpfbehcqrwmxytkpksaqcnta ; /usr/bin/python3'
Feb 02 14:29:45 np0005605268.novalocal sudo[6583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:45 np0005605268.novalocal python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770042584.8569462-231-50155439851456/source _original_basename=tmpbx7j9684 follow=False checksum=2b285b712f9610742f946634abc937bcb4e6beeb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:45 np0005605268.novalocal sudo[6583]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:46 np0005605268.novalocal python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:29:46 np0005605268.novalocal python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:29:46 np0005605268.novalocal sudo[6737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wesojfwghyiiyutikxrpegrfrzkdxcmg ; /usr/bin/python3'
Feb 02 14:29:46 np0005605268.novalocal sudo[6737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:47 np0005605268.novalocal python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:29:47 np0005605268.novalocal sudo[6737]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:47 np0005605268.novalocal sudo[6810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibtgvrqgaxwhonorergjnmyfdxecygx ; /usr/bin/python3'
Feb 02 14:29:48 np0005605268.novalocal sudo[6810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:48 np0005605268.novalocal python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1770042586.8657134-273-219953553221863/source _original_basename=tmphd8wasv_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:29:48 np0005605268.novalocal sudo[6810]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:48 np0005605268.novalocal sudo[6861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhuimowzlcnjnttlwgxqzeywyadyfle ; /usr/bin/python3'
Feb 02 14:29:48 np0005605268.novalocal sudo[6861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:29:48 np0005605268.novalocal python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-9793-e3c4-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:29:48 np0005605268.novalocal sudo[6861]: pam_unix(sudo:session): session closed for user root
Feb 02 14:29:49 np0005605268.novalocal python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9793-e3c4-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 02 14:29:50 np0005605268.novalocal python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:30:10 np0005605268.novalocal sudo[6943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsrsdgqhhschdygtrijlxttnbtoqboxy ; /usr/bin/python3'
Feb 02 14:30:10 np0005605268.novalocal sudo[6943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:30:10 np0005605268.novalocal python3[6945]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:30:10 np0005605268.novalocal sudo[6943]: pam_unix(sudo:session): session closed for user root
Feb 02 14:30:12 np0005605268.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 02 14:30:47 np0005605268.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 02 14:30:47 np0005605268.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4162] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 14:30:47 np0005605268.novalocal systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4379] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4400] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4402] device (eth1): carrier: link connected
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4404] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4408] policy: auto-activating connection 'Wired connection 1' (f01884bf-ed74-3e7e-8325-4a6b7f715beb)
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4411] device (eth1): Activation: starting connection 'Wired connection 1' (f01884bf-ed74-3e7e-8325-4a6b7f715beb)
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4412] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4414] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4416] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 14:30:47 np0005605268.novalocal NetworkManager[855]: <info>  [1770042647.4419] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:30:49 np0005605268.novalocal python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-feb9-de38-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:30:59 np0005605268.novalocal sudo[7053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjxtxelxumjqqndfhhylxwtlmkvmceo ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 14:30:59 np0005605268.novalocal sudo[7053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:30:59 np0005605268.novalocal python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:30:59 np0005605268.novalocal sudo[7053]: pam_unix(sudo:session): session closed for user root
Feb 02 14:30:59 np0005605268.novalocal sudo[7126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npvrpmzpteqxvvaxkukusaruiasxnwau ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 14:30:59 np0005605268.novalocal sudo[7126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:30:59 np0005605268.novalocal python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770042658.9978392-102-19924512821981/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=11a85aa4b4fbde400f004af2cdb95ca98bcbc4a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:30:59 np0005605268.novalocal sudo[7126]: pam_unix(sudo:session): session closed for user root
Feb 02 14:31:00 np0005605268.novalocal sudo[7176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cedgbapvfuwzwwevjevjbabscmwguuaw ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 14:31:00 np0005605268.novalocal sudo[7176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:31:00 np0005605268.novalocal python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.3885] caught SIGTERM, shutting down normally.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Stopping Network Manager...
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.3894] dhcp4 (eth0): canceled DHCP transaction
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.3895] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.3895] dhcp4 (eth0): state changed no lease
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.3898] manager: NetworkManager state is now CONNECTING
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.4057] dhcp4 (eth1): canceled DHCP transaction
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.4059] dhcp4 (eth1): state changed no lease
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[855]: <info>  [1770042660.4105] exiting (success)
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Stopped Network Manager.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: NetworkManager.service: Consumed 1.295s CPU time, 10.0M memory peak.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Starting Network Manager...
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.4553] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3da1c12d-3f65-4f20-960d-600dea66a7e3)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.4556] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.4591] manager[0x563c987bf000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Starting Hostname Service...
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Started Hostname Service.
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5111] hostname: hostname: using hostnamed
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5112] hostname: static hostname changed from (none) to "np0005605268.novalocal"
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5117] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5121] manager[0x563c987bf000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5121] manager[0x563c987bf000]: rfkill: WWAN hardware radio set enabled
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5140] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5141] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5142] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5143] manager: Networking is enabled by state file
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5145] settings: Loaded settings plugin: keyfile (internal)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5148] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5166] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5172] dhcp: init: Using DHCP client 'internal'
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5174] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5179] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5183] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5190] device (lo): Activation: starting connection 'lo' (04c2cb28-6382-41c1-9610-496161f13eea)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5195] device (eth0): carrier: link connected
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5198] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5202] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5203] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5207] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5211] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5215] device (eth1): carrier: link connected
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5218] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5221] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (f01884bf-ed74-3e7e-8325-4a6b7f715beb) (indicated)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5222] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5226] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5231] device (eth1): Activation: starting connection 'Wired connection 1' (f01884bf-ed74-3e7e-8325-4a6b7f715beb)
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Started Network Manager.
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5235] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5240] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5242] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5244] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5246] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5248] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5250] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5253] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5255] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5262] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5265] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5272] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5275] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5288] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5292] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 14:31:00 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042660.5297] device (lo): Activation: successful, device activated.
Feb 02 14:31:00 np0005605268.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 02 14:31:00 np0005605268.novalocal sudo[7176]: pam_unix(sudo:session): session closed for user root
Feb 02 14:31:00 np0005605268.novalocal python3[7243]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-feb9-de38-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0080] dhcp4 (eth0): state changed new lease, address=38.129.56.16
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0101] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0210] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0238] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0242] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0246] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0250] device (eth0): Activation: successful, device activated.
Feb 02 14:31:02 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042662.0256] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 02 14:31:02 np0005605268.novalocal chronyd[790]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Feb 02 14:31:12 np0005605268.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 14:31:30 np0005605268.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 14:31:37 np0005605268.novalocal systemd[4304]: Starting Mark boot as successful...
Feb 02 14:31:37 np0005605268.novalocal systemd[4304]: Finished Mark boot as successful.
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0314] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 14:31:46 np0005605268.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 14:31:46 np0005605268.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0660] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0663] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0669] device (eth1): Activation: successful, device activated.
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0675] manager: startup complete
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0676] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <warn>  [1770042706.0681] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0688] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0827] dhcp4 (eth1): canceled DHCP transaction
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0829] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0830] dhcp4 (eth1): state changed no lease
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0842] policy: auto-activating connection 'ci-private-network' (7c114712-cf58-5bea-858e-99132cd0be47)
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0846] device (eth1): Activation: starting connection 'ci-private-network' (7c114712-cf58-5bea-858e-99132cd0be47)
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0848] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0851] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0856] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0864] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0942] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0948] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 14:31:46 np0005605268.novalocal NetworkManager[7192]: <info>  [1770042706.0959] device (eth1): Activation: successful, device activated.
Feb 02 14:31:56 np0005605268.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 14:32:00 np0005605268.novalocal sshd-session[4313]: Received disconnect from 38.102.83.114 port 60104:11: disconnected by user
Feb 02 14:32:00 np0005605268.novalocal sshd-session[4313]: Disconnected from user zuul 38.102.83.114 port 60104
Feb 02 14:32:00 np0005605268.novalocal sshd-session[4300]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:32:00 np0005605268.novalocal systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Feb 02 14:32:00 np0005605268.novalocal sshd-session[7291]: Accepted publickey for zuul from 38.102.83.114 port 47346 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 14:32:00 np0005605268.novalocal systemd-logind[786]: New session 3 of user zuul.
Feb 02 14:32:00 np0005605268.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 02 14:32:00 np0005605268.novalocal sshd-session[7291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:32:00 np0005605268.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krlvizpgksgmpzbkpxgwrxrnzhtjcijd ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 14:32:00 np0005605268.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:32:01 np0005605268.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:32:01 np0005605268.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Feb 02 14:32:01 np0005605268.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxkupcifrwzrkwwnlqtzceclebftkzmp ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 02 14:32:01 np0005605268.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:32:01 np0005605268.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770042720.8801546-267-242186830835690/source _original_basename=tmpd5v65349 follow=False checksum=ea00caf3f5ac14f2a1f4d51a1efa1715ce1a149e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:32:01 np0005605268.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Feb 02 14:32:03 np0005605268.novalocal sshd-session[7294]: Connection closed by 38.102.83.114 port 47346
Feb 02 14:32:03 np0005605268.novalocal sshd-session[7291]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:32:03 np0005605268.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 02 14:32:03 np0005605268.novalocal systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Feb 02 14:32:03 np0005605268.novalocal systemd-logind[786]: Removed session 3.
Feb 02 14:34:37 np0005605268.novalocal systemd[4304]: Created slice User Background Tasks Slice.
Feb 02 14:34:37 np0005605268.novalocal systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Feb 02 14:34:37 np0005605268.novalocal systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Feb 02 14:38:35 np0005605268.novalocal sshd-session[7475]: Accepted publickey for zuul from 38.102.83.114 port 60300 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 14:38:35 np0005605268.novalocal systemd-logind[786]: New session 4 of user zuul.
Feb 02 14:38:35 np0005605268.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 02 14:38:35 np0005605268.novalocal sshd-session[7475]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:38:35 np0005605268.novalocal sudo[7502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-litxpslntgzkiyvndyibnloqqvqshujp ; /usr/bin/python3'
Feb 02 14:38:35 np0005605268.novalocal sudo[7502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:35 np0005605268.novalocal python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-967d-ac41-000000002167-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:35 np0005605268.novalocal sudo[7502]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:36 np0005605268.novalocal sudo[7531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxjnkssosudgvchfdarjecbqfogpjye ; /usr/bin/python3'
Feb 02 14:38:36 np0005605268.novalocal sudo[7531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:36 np0005605268.novalocal python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:36 np0005605268.novalocal sudo[7531]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:36 np0005605268.novalocal sudo[7557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyohinxxuifvywtccrypwrgdmskeveyk ; /usr/bin/python3'
Feb 02 14:38:36 np0005605268.novalocal sudo[7557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:36 np0005605268.novalocal python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:36 np0005605268.novalocal sudo[7557]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:36 np0005605268.novalocal sudo[7583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbqrmtqswwiuvdhzyiprdqnbosmvvxjf ; /usr/bin/python3'
Feb 02 14:38:36 np0005605268.novalocal sudo[7583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:36 np0005605268.novalocal python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:36 np0005605268.novalocal sudo[7583]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:36 np0005605268.novalocal sudo[7609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfrbhyvkfuxliuhjnksgnikxxvczroue ; /usr/bin/python3'
Feb 02 14:38:36 np0005605268.novalocal sudo[7609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:36 np0005605268.novalocal python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:36 np0005605268.novalocal sudo[7609]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:37 np0005605268.novalocal sudo[7635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejvzzdtxxirnqrsnxwsopgqwehqhltn ; /usr/bin/python3'
Feb 02 14:38:37 np0005605268.novalocal sudo[7635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:37 np0005605268.novalocal python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:37 np0005605268.novalocal sudo[7635]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:37 np0005605268.novalocal sudo[7713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veklgzlydvovbmkozdhsnumnrelyhqdj ; /usr/bin/python3'
Feb 02 14:38:37 np0005605268.novalocal sudo[7713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:37 np0005605268.novalocal python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:38:37 np0005605268.novalocal sudo[7713]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:38 np0005605268.novalocal sudo[7786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmccafqkywwbxaghwgdxfqplxfiankf ; /usr/bin/python3'
Feb 02 14:38:38 np0005605268.novalocal sudo[7786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:38 np0005605268.novalocal python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770043117.6948397-497-159844572862663/source _original_basename=tmp862kcy2p follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:38:38 np0005605268.novalocal sudo[7786]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:39 np0005605268.novalocal sudo[7836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sioagxldsedycncboejluoehqyqhgrmx ; /usr/bin/python3'
Feb 02 14:38:39 np0005605268.novalocal sudo[7836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:39 np0005605268.novalocal python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 14:38:39 np0005605268.novalocal systemd[1]: Reloading.
Feb 02 14:38:39 np0005605268.novalocal systemd-rc-local-generator[7856]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 14:38:39 np0005605268.novalocal sudo[7836]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:40 np0005605268.novalocal sudo[7892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppineztmdpuercemjvfhcsftjjeaunvm ; /usr/bin/python3'
Feb 02 14:38:40 np0005605268.novalocal sudo[7892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:40 np0005605268.novalocal python3[7894]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 02 14:38:40 np0005605268.novalocal sudo[7892]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:41 np0005605268.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czxsfbwxakuzlyvcbpoiovuurtufaurp ; /usr/bin/python3'
Feb 02 14:38:41 np0005605268.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:41 np0005605268.novalocal python3[7920]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:41 np0005605268.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:41 np0005605268.novalocal sudo[7946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqsvwpoinaokjvcupeyvoxftkquvhtks ; /usr/bin/python3'
Feb 02 14:38:41 np0005605268.novalocal sudo[7946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:41 np0005605268.novalocal python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:41 np0005605268.novalocal sudo[7946]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:41 np0005605268.novalocal sudo[7974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvchajklfclvdsofpnsuclsvnchuxzn ; /usr/bin/python3'
Feb 02 14:38:41 np0005605268.novalocal sudo[7974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:41 np0005605268.novalocal python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:41 np0005605268.novalocal sudo[7974]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:41 np0005605268.novalocal sudo[8002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdsrnckrdsncmhfgkmgaidigqfuizmk ; /usr/bin/python3'
Feb 02 14:38:41 np0005605268.novalocal sudo[8002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:41 np0005605268.novalocal python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:41 np0005605268.novalocal sudo[8002]: pam_unix(sudo:session): session closed for user root
Feb 02 14:38:42 np0005605268.novalocal python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-967d-ac41-00000000216e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:38:43 np0005605268.novalocal python3[8061]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 14:38:45 np0005605268.novalocal sshd-session[7478]: Connection closed by 38.102.83.114 port 60300
Feb 02 14:38:45 np0005605268.novalocal sshd-session[7475]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:38:45 np0005605268.novalocal systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Feb 02 14:38:45 np0005605268.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 02 14:38:45 np0005605268.novalocal systemd[1]: session-4.scope: Consumed 3.444s CPU time.
Feb 02 14:38:45 np0005605268.novalocal systemd-logind[786]: Removed session 4.
Feb 02 14:38:46 np0005605268.novalocal sshd-session[8066]: Accepted publickey for zuul from 38.102.83.114 port 45526 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 14:38:46 np0005605268.novalocal systemd-logind[786]: New session 5 of user zuul.
Feb 02 14:38:46 np0005605268.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 02 14:38:46 np0005605268.novalocal sshd-session[8066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:38:46 np0005605268.novalocal sudo[8093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clangafkvbmndloyodkgrmcwwhjqfgcd ; /usr/bin/python3'
Feb 02 14:38:46 np0005605268.novalocal sudo[8093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:38:47 np0005605268.novalocal python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 14:39:08 np0005605268.novalocal setsebool[8133]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 02 14:39:08 np0005605268.novalocal setsebool[8133]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  Converting 385 SID table entries...
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 14:39:21 np0005605268.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  Converting 388 SID table entries...
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 14:39:32 np0005605268.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 14:39:52 np0005605268.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 02 14:39:52 np0005605268.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 14:39:52 np0005605268.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 02 14:39:52 np0005605268.novalocal systemd[1]: Reloading.
Feb 02 14:39:52 np0005605268.novalocal systemd-rc-local-generator[8900]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 14:39:52 np0005605268.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 14:39:54 np0005605268.novalocal sudo[8093]: pam_unix(sudo:session): session closed for user root
Feb 02 14:39:54 np0005605268.novalocal python3[10663]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-ce63-c317-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:39:55 np0005605268.novalocal kernel: evm: overlay not supported
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: Starting D-Bus User Message Bus...
Feb 02 14:39:55 np0005605268.novalocal dbus-broker-launch[11958]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 02 14:39:55 np0005605268.novalocal dbus-broker-launch[11958]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: Started D-Bus User Message Bus.
Feb 02 14:39:55 np0005605268.novalocal dbus-broker-lau[11958]: Ready
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: Created slice Slice /user.
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: podman-11800.scope: unit configures an IP firewall, but not running as root.
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: Started podman-11800.scope.
Feb 02 14:39:55 np0005605268.novalocal systemd[4304]: Started podman-pause-5571cac9.scope.
Feb 02 14:39:56 np0005605268.novalocal sudo[12560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvoimmgdkuifomjltzspwayvraaqzvbd ; /usr/bin/python3'
Feb 02 14:39:56 np0005605268.novalocal sudo[12560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:39:56 np0005605268.novalocal python3[12580]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.244:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.244:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:39:56 np0005605268.novalocal python3[12580]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 02 14:39:56 np0005605268.novalocal sudo[12560]: pam_unix(sudo:session): session closed for user root
Feb 02 14:39:57 np0005605268.novalocal sshd-session[8069]: Connection closed by 38.102.83.114 port 45526
Feb 02 14:39:57 np0005605268.novalocal sshd-session[8066]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:39:57 np0005605268.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Feb 02 14:39:57 np0005605268.novalocal systemd[1]: session-5.scope: Consumed 44.754s CPU time.
Feb 02 14:39:57 np0005605268.novalocal systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Feb 02 14:39:57 np0005605268.novalocal systemd-logind[786]: Removed session 5.
Feb 02 14:40:17 np0005605268.novalocal sshd-session[24056]: Connection closed by 38.129.56.75 port 49338 [preauth]
Feb 02 14:40:17 np0005605268.novalocal sshd-session[24057]: Connection closed by 38.129.56.75 port 49348 [preauth]
Feb 02 14:40:17 np0005605268.novalocal sshd-session[24058]: Unable to negotiate with 38.129.56.75 port 49362: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 02 14:40:17 np0005605268.novalocal sshd-session[24060]: Unable to negotiate with 38.129.56.75 port 49372: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 02 14:40:17 np0005605268.novalocal sshd-session[24061]: Unable to negotiate with 38.129.56.75 port 49384: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 02 14:40:21 np0005605268.novalocal sshd-session[25841]: Accepted publickey for zuul from 38.102.83.114 port 36860 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 14:40:21 np0005605268.novalocal systemd-logind[786]: New session 6 of user zuul.
Feb 02 14:40:21 np0005605268.novalocal systemd[1]: Started Session 6 of User zuul.
Feb 02 14:40:21 np0005605268.novalocal sshd-session[25841]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:40:21 np0005605268.novalocal python3[25937]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcj7hA0Q42s9ME9c6t16gq8B1Bu/hvMoJHCPVjGUmMFVr+ce64Ah1TeJBz+cYaaGnZOmRBB9BiZ8HUYa0mFRpQ= zuul@np0005605267.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:40:22 np0005605268.novalocal sudo[26205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmznjygkecjluvyxgutzpuvmczavksy ; /usr/bin/python3'
Feb 02 14:40:22 np0005605268.novalocal sudo[26205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:22 np0005605268.novalocal python3[26212]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcj7hA0Q42s9ME9c6t16gq8B1Bu/hvMoJHCPVjGUmMFVr+ce64Ah1TeJBz+cYaaGnZOmRBB9BiZ8HUYa0mFRpQ= zuul@np0005605267.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:40:22 np0005605268.novalocal sudo[26205]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:22 np0005605268.novalocal sudo[26641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arngferiyaucjydajwvfkuiphgthnakt ; /usr/bin/python3'
Feb 02 14:40:22 np0005605268.novalocal sudo[26641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:23 np0005605268.novalocal python3[26651]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005605268.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 02 14:40:23 np0005605268.novalocal useradd[26757]: new group: name=cloud-admin, GID=1002
Feb 02 14:40:23 np0005605268.novalocal useradd[26757]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Feb 02 14:40:23 np0005605268.novalocal sudo[26641]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:23 np0005605268.novalocal sudo[26935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkytrjnnivvkzzftfwpdrkrfwjqsugft ; /usr/bin/python3'
Feb 02 14:40:23 np0005605268.novalocal sudo[26935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:23 np0005605268.novalocal python3[26941]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcj7hA0Q42s9ME9c6t16gq8B1Bu/hvMoJHCPVjGUmMFVr+ce64Ah1TeJBz+cYaaGnZOmRBB9BiZ8HUYa0mFRpQ= zuul@np0005605267.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 02 14:40:23 np0005605268.novalocal sudo[26935]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:23 np0005605268.novalocal sudo[27219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akzacylinxpyykbptclkxdcfsgrwcioc ; /usr/bin/python3'
Feb 02 14:40:23 np0005605268.novalocal sudo[27219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:23 np0005605268.novalocal python3[27231]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:40:23 np0005605268.novalocal sudo[27219]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:24 np0005605268.novalocal sudo[27494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szowqigvjsxoqljernjdntmcrluyxazf ; /usr/bin/python3'
Feb 02 14:40:24 np0005605268.novalocal sudo[27494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:24 np0005605268.novalocal python3[27502]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770043223.685332-135-60871700790668/source _original_basename=tmph34oy2hn follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:40:24 np0005605268.novalocal sudo[27494]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:24 np0005605268.novalocal sudo[27827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryvelebprnqlnqilkcchggjeoyrtdrko ; /usr/bin/python3'
Feb 02 14:40:24 np0005605268.novalocal sudo[27827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:40:25 np0005605268.novalocal python3[27836]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 02 14:40:25 np0005605268.novalocal systemd[1]: Starting Hostname Service...
Feb 02 14:40:25 np0005605268.novalocal systemd[1]: Started Hostname Service.
Feb 02 14:40:25 np0005605268.novalocal systemd-hostnamed[27970]: Changed pretty hostname to 'compute-0'
Feb 02 14:40:25 compute-0 systemd-hostnamed[27970]: Hostname set to <compute-0> (static)
Feb 02 14:40:25 compute-0 NetworkManager[7192]: <info>  [1770043225.2414] hostname: static hostname changed from "np0005605268.novalocal" to "compute-0"
Feb 02 14:40:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 14:40:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 14:40:25 compute-0 sudo[27827]: pam_unix(sudo:session): session closed for user root
Feb 02 14:40:25 compute-0 sshd-session[25886]: Connection closed by 38.102.83.114 port 36860
Feb 02 14:40:25 compute-0 sshd-session[25841]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:40:25 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Feb 02 14:40:25 compute-0 systemd[1]: session-6.scope: Consumed 1.920s CPU time.
Feb 02 14:40:25 compute-0 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Feb 02 14:40:25 compute-0 systemd-logind[786]: Removed session 6.
Feb 02 14:40:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 14:40:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 14:40:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 37.833s CPU time.
Feb 02 14:40:30 compute-0 systemd[1]: run-rb8e02305ba5240899611b74d7d0d6bd0.service: Deactivated successfully.
Feb 02 14:40:35 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 14:40:55 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 14:43:37 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 02 14:43:37 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 02 14:43:37 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 02 14:43:37 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 02 14:44:39 compute-0 sshd-session[29980]: Accepted publickey for zuul from 38.129.56.75 port 37592 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 14:44:39 compute-0 systemd-logind[786]: New session 7 of user zuul.
Feb 02 14:44:39 compute-0 systemd[1]: Started Session 7 of User zuul.
Feb 02 14:44:39 compute-0 sshd-session[29980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:44:39 compute-0 python3[30056]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:44:41 compute-0 sudo[30170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnzjrfiikqtqsltlukkcviuluozxpqn ; /usr/bin/python3'
Feb 02 14:44:41 compute-0 sudo[30170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:41 compute-0 python3[30172]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:41 compute-0 sudo[30170]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:41 compute-0 sudo[30243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfttwjgxnjhhcprrblyxlvbrcgbgjgem ; /usr/bin/python3'
Feb 02 14:44:41 compute-0 sudo[30243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:41 compute-0 python3[30245]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:41 compute-0 sudo[30243]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:41 compute-0 sudo[30269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzvhlhfgftgvrnedspxynyoaigmjwear ; /usr/bin/python3'
Feb 02 14:44:41 compute-0 sudo[30269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:41 compute-0 python3[30271]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:41 compute-0 sudo[30269]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:42 compute-0 sudo[30342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyqcbhxceggkbsbfrukyoebimtnjpkjn ; /usr/bin/python3'
Feb 02 14:44:42 compute-0 sudo[30342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:42 compute-0 python3[30344]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:42 compute-0 sudo[30342]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:42 compute-0 sudo[30368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chdckmjgbitwsvhesebbpoppukdzhxqm ; /usr/bin/python3'
Feb 02 14:44:42 compute-0 sudo[30368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:42 compute-0 python3[30370]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:42 compute-0 sudo[30368]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:42 compute-0 sudo[30441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvusrkzidukvfchzqhcrplbyhpqprlny ; /usr/bin/python3'
Feb 02 14:44:42 compute-0 sudo[30441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:42 compute-0 python3[30443]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:42 compute-0 sudo[30441]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:42 compute-0 sudo[30467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkajkyyhwtiubojrfrjiwfdpqjdhcrf ; /usr/bin/python3'
Feb 02 14:44:42 compute-0 sudo[30467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:43 compute-0 python3[30469]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:43 compute-0 sudo[30467]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:43 compute-0 sudo[30540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclgeleettaewlpvgjdwfcdywqqkhyzo ; /usr/bin/python3'
Feb 02 14:44:43 compute-0 sudo[30540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:43 compute-0 python3[30542]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:43 compute-0 sudo[30540]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:43 compute-0 sudo[30566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllrcylvvfrxsxtiqkzojknzrffgkcsv ; /usr/bin/python3'
Feb 02 14:44:43 compute-0 sudo[30566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:43 compute-0 python3[30568]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:43 compute-0 sudo[30566]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:43 compute-0 sudo[30639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxssbdmhfhcveadkvobgaumkpjqcawwy ; /usr/bin/python3'
Feb 02 14:44:43 compute-0 sudo[30639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:44 compute-0 python3[30641]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:44 compute-0 sudo[30639]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:44 compute-0 sudo[30665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcumpermnxlthmfqbfcfkuwrfurjhttx ; /usr/bin/python3'
Feb 02 14:44:44 compute-0 sudo[30665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:44 compute-0 python3[30667]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:44 compute-0 sudo[30665]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:44 compute-0 sudo[30738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flyzelfuzynlvcvunpmsoywdpvdmzmop ; /usr/bin/python3'
Feb 02 14:44:44 compute-0 sudo[30738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:44 compute-0 python3[30740]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:44 compute-0 sudo[30738]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:44 compute-0 sudo[30764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkkxqyselvrzvtwvbisgixuozzunwukc ; /usr/bin/python3'
Feb 02 14:44:44 compute-0 sudo[30764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:44 compute-0 python3[30766]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 14:44:44 compute-0 sudo[30764]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:45 compute-0 sudo[30837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmtkpcggvospoxjuxffbrdeqogtzamgp ; /usr/bin/python3'
Feb 02 14:44:45 compute-0 sudo[30837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:44:49 compute-0 python3[30839]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770043481.0247161-33708-227070060352299/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:44:49 compute-0 sudo[30837]: pam_unix(sudo:session): session closed for user root
Feb 02 14:44:51 compute-0 sshd-session[30864]: Connection closed by 192.168.122.11 port 60802 [preauth]
Feb 02 14:44:51 compute-0 sshd-session[30865]: Connection closed by 192.168.122.11 port 60806 [preauth]
Feb 02 14:44:51 compute-0 sshd-session[30866]: Unable to negotiate with 192.168.122.11 port 60816: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 02 14:44:51 compute-0 sshd-session[30867]: Unable to negotiate with 192.168.122.11 port 60822: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 02 14:44:51 compute-0 sshd-session[30868]: Unable to negotiate with 192.168.122.11 port 60826: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 02 14:45:01 compute-0 python3[30897]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:48:21 compute-0 sshd-session[30900]: Connection closed by 3.88.190.149 port 21748 [preauth]
Feb 02 14:50:00 compute-0 sshd-session[29983]: Received disconnect from 38.129.56.75 port 37592:11: disconnected by user
Feb 02 14:50:00 compute-0 sshd-session[29983]: Disconnected from user zuul 38.129.56.75 port 37592
Feb 02 14:50:00 compute-0 sshd-session[29980]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:50:00 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Feb 02 14:50:00 compute-0 systemd[1]: session-7.scope: Consumed 3.997s CPU time.
Feb 02 14:50:00 compute-0 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Feb 02 14:50:00 compute-0 systemd-logind[786]: Removed session 7.
Feb 02 14:57:39 compute-0 sshd-session[30906]: Accepted publickey for zuul from 192.168.122.30 port 48274 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 14:57:39 compute-0 systemd-logind[786]: New session 8 of user zuul.
Feb 02 14:57:39 compute-0 systemd[1]: Started Session 8 of User zuul.
Feb 02 14:57:39 compute-0 sshd-session[30906]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:57:40 compute-0 python3.9[31059]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:57:41 compute-0 sudo[31238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnsfwfknodofzafjcjkkcljsdnblbkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044261.4268582-27-131226053759224/AnsiballZ_command.py'
Feb 02 14:57:41 compute-0 sudo[31238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:57:41 compute-0 python3.9[31240]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:57:49 compute-0 sudo[31238]: pam_unix(sudo:session): session closed for user root
Feb 02 14:57:49 compute-0 sshd-session[30909]: Connection closed by 192.168.122.30 port 48274
Feb 02 14:57:49 compute-0 sshd-session[30906]: pam_unix(sshd:session): session closed for user zuul
Feb 02 14:57:49 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Feb 02 14:57:49 compute-0 systemd[1]: session-8.scope: Consumed 7.677s CPU time.
Feb 02 14:57:49 compute-0 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Feb 02 14:57:49 compute-0 systemd-logind[786]: Removed session 8.
Feb 02 14:57:50 compute-0 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Feb 02 14:57:50 compute-0 irqbalance[783]: IRQ 26 affinity is now unmanaged
Feb 02 14:58:05 compute-0 sshd-session[31300]: Accepted publickey for zuul from 192.168.122.30 port 47292 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 14:58:05 compute-0 systemd-logind[786]: New session 9 of user zuul.
Feb 02 14:58:05 compute-0 systemd[1]: Started Session 9 of User zuul.
Feb 02 14:58:05 compute-0 sshd-session[31300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 14:58:06 compute-0 python3.9[31453]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 02 14:58:07 compute-0 python3.9[31627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:58:07 compute-0 sudo[31777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtojandxyywoyiydesgqdpkbugsnpvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044287.454713-40-38779430563322/AnsiballZ_command.py'
Feb 02 14:58:07 compute-0 sudo[31777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:08 compute-0 python3.9[31779]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 14:58:08 compute-0 sudo[31777]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:08 compute-0 sudo[31930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhbfmnkgltrsxyuxxuacewysnhycgctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044288.3133972-52-224124053453235/AnsiballZ_stat.py'
Feb 02 14:58:08 compute-0 sudo[31930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:08 compute-0 python3.9[31932]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 14:58:08 compute-0 sudo[31930]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:09 compute-0 sudo[32082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkkioipeintqjwabzgussefebjhqifxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044289.013143-60-92875350278857/AnsiballZ_file.py'
Feb 02 14:58:09 compute-0 sudo[32082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:09 compute-0 python3.9[32084]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:58:09 compute-0 sudo[32082]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:09 compute-0 sudo[32234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiujixhwrjsozstvwhgumhuqrmhynbov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044289.7101643-68-144138969684827/AnsiballZ_stat.py'
Feb 02 14:58:09 compute-0 sudo[32234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:10 compute-0 python3.9[32236]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 14:58:10 compute-0 sudo[32234]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:10 compute-0 sudo[32357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foauuhrcqhzsgixyjoswpeeyzpoqbahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044289.7101643-68-144138969684827/AnsiballZ_copy.py'
Feb 02 14:58:10 compute-0 sudo[32357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:10 compute-0 python3.9[32359]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044289.7101643-68-144138969684827/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:58:10 compute-0 sudo[32357]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:11 compute-0 sudo[32509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxufzbbjtmkoowuraljrplpxwdmxxwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044290.823281-83-168062824092875/AnsiballZ_setup.py'
Feb 02 14:58:11 compute-0 sudo[32509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:11 compute-0 python3.9[32511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:58:11 compute-0 sudo[32509]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:11 compute-0 sudo[32665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmlsrnjusqsaxgcqegycadugxhlwjce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044291.6525383-91-124015662779033/AnsiballZ_file.py'
Feb 02 14:58:11 compute-0 sudo[32665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:12 compute-0 python3.9[32667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 14:58:12 compute-0 sudo[32665]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:12 compute-0 sudo[32817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teyuqvzylgdmcwdxmvshzpzojkkmmcsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044292.2265034-100-96453572431309/AnsiballZ_file.py'
Feb 02 14:58:12 compute-0 sudo[32817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:12 compute-0 python3.9[32819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 14:58:12 compute-0 sudo[32817]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:13 compute-0 python3.9[32969]: ansible-ansible.builtin.service_facts Invoked
Feb 02 14:58:15 compute-0 python3.9[33222]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 14:58:16 compute-0 python3.9[33372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:58:17 compute-0 python3.9[33527]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 14:58:17 compute-0 sudo[33683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-migccermxghsxmlbuhjofxgjvtlrehrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044297.6985903-148-189172302754783/AnsiballZ_setup.py'
Feb 02 14:58:17 compute-0 sudo[33683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:18 compute-0 python3.9[33685]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 14:58:18 compute-0 sudo[33683]: pam_unix(sudo:session): session closed for user root
Feb 02 14:58:18 compute-0 sudo[33767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prgnnxgnuurktxknwiaezocfgnlwtjvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044297.6985903-148-189172302754783/AnsiballZ_dnf.py'
Feb 02 14:58:18 compute-0 sudo[33767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 14:58:19 compute-0 python3.9[33769]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 14:59:59 compute-0 systemd[1]: Reloading.
Feb 02 14:59:59 compute-0 systemd-rc-local-generator[34110]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 14:59:59 compute-0 systemd[1]: Starting dnf makecache...
Feb 02 14:59:59 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 02 14:59:59 compute-0 dnf[34125]: Failed determining last makecache time.
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-barbican-42b4c41831408a8e323 158 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-glean-642fffe0203a8ffcc2443db52 180 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-cinder-1c00d6490d88e436f26ef 185 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-stevedore-c4acc5639fd2329372142 180 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-cloudkitty-tests-tempest-783703 175 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd[1]: Reloading.
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-diskimage-builder-61b717cc45660834fe9a 134 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-nova-eaa65f0b85123a4ee343246 192 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-designate-tests-tempest-347fdbc 142 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd-rc-local-generator[34165]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-glance-1fd12c29b339f30fe823e 179 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 191 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-manila-d783d10e75495b73866db 193 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-neutron-95cadbd379667c8520c8 199 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-octavia-5975097dd4b021385178 175 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-watcher-c014f81a8647287f6dcc 168 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-tcib-78032d201b02cee27e8e644c61 140 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 190 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-swift-dc98a8463506ac520c469a 191 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-python-tempestconf-8515371b7cceebd4282 189 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd[1]: Reloading.
Feb 02 14:59:59 compute-0 dnf[34125]: delorean-openstack-heat-ui-013accbfd179753bc3f0 186 kB/s | 3.0 kB     00:00
Feb 02 14:59:59 compute-0 systemd-rc-local-generator[34219]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:00:00 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 02 15:00:00 compute-0 dnf[34125]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.7 kB     00:00
Feb 02 15:00:00 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:00:00 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:00:00 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:00:00 compute-0 dnf[34125]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Feb 02 15:00:00 compute-0 dnf[34125]: CentOS Stream 9 - CRB                            70 kB/s | 6.6 kB     00:00
Feb 02 15:00:00 compute-0 dnf[34125]: CentOS Stream 9 - Extras packages                31 kB/s | 7.3 kB     00:00
Feb 02 15:00:00 compute-0 dnf[34125]: dlrn-antelope-testing                           163 kB/s | 3.0 kB     00:00
Feb 02 15:00:00 compute-0 dnf[34125]: dlrn-antelope-build-deps                        155 kB/s | 3.0 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: centos9-rabbitmq                                117 kB/s | 3.0 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: centos9-storage                                 113 kB/s | 3.0 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: centos9-opstools                                102 kB/s | 3.0 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: NFV SIG OpenvSwitch                              36 kB/s | 3.0 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: repo-setup-centos-appstream                     159 kB/s | 4.4 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: repo-setup-centos-highavailability              176 kB/s | 3.9 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: repo-setup-centos-powertools                    177 kB/s | 4.3 kB     00:00
Feb 02 15:00:01 compute-0 dnf[34125]: Extra Packages for Enterprise Linux 9 - x86_64  232 kB/s |  31 kB     00:00
Feb 02 15:00:02 compute-0 dnf[34125]: Extra Packages for Enterprise Linux 9 - x86_64   53 kB/s |  29 kB     00:00
Feb 02 15:00:02 compute-0 dnf[34125]: Errors during downloading metadata for repository 'epel-low-priority':
Feb 02 15:00:02 compute-0 dnf[34125]:   - Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/b91c3e9e4a65bea79c79970f3e18a6c1e08c2bc2c33d9720fed74026713e7e97-primary.xml.xz (IP: 172.65.90.5)
Feb 02 15:00:02 compute-0 dnf[34125]:   - Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/c40e9ff61186b0c344776a51e7e32dff7183ca6e1f6c592550028c41142c0c9d-updateinfo.xml.bz2 (IP: 172.65.90.5)
Feb 02 15:00:02 compute-0 dnf[34125]:   - Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/78b3a8831da8d985f4f1462c109a8c99cfe3f36128c53ddac4ab4b1d90777ad9-filelists.xml.xz (IP: 172.65.90.5)
Feb 02 15:00:02 compute-0 dnf[34125]: Error: Failed to download metadata for repo 'epel-low-priority': Yum repo downloading error: Downloading error(s): repodata/b91c3e9e4a65bea79c79970f3e18a6c1e08c2bc2c33d9720fed74026713e7e97-primary.xml.xz - Download failed: Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/b91c3e9e4a65bea79c79970f3e18a6c1e08c2bc2c33d9720fed74026713e7e97-primary.xml.xz (IP: 172.65.90.5); repodata/78b3a8831da8d985f4f1462c109a8c99cfe3f36128c53ddac4ab4b1d90777ad9-filelists.xml.xz - Download failed: Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/78b3a8831da8d985f4f1462c109a8c99cfe3f36128c53ddac4ab4b1d90777ad9-filelists.xml.xz (IP: 172.65.90.5); repodata/c40e9ff61186b0c344776a51e7e32dff7183ca6e1f6c592550028c41142c0c9d-updateinfo.xml.bz2 - Download failed: Status code: 404 for https://ca.mirrors.cicku.me/epel/9/Everything/x86_64/repodata/c40e9ff61186b0c344776a51e7e32dff7183ca6e1f6c592550028c41142c0c9d-updateinfo.xml.bz2 (IP: 172.65.90.5)
Feb 02 15:00:02 compute-0 systemd[1]: dnf-makecache.service: Main process exited, code=exited, status=1/FAILURE
Feb 02 15:00:02 compute-0 systemd[1]: dnf-makecache.service: Failed with result 'exit-code'.
Feb 02 15:00:02 compute-0 systemd[1]: Failed to start dnf makecache.
Feb 02 15:00:02 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.305s CPU time.
Feb 02 15:00:58 compute-0 kernel: SELinux:  Converting 2727 SID table entries...
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:00:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:00:59 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 02 15:00:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:00:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:00:59 compute-0 systemd[1]: Reloading.
Feb 02 15:00:59 compute-0 systemd-rc-local-generator[34568]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:00:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:00:59 compute-0 sudo[33767]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:01:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:01:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.101s CPU time.
Feb 02 15:01:00 compute-0 systemd[1]: run-r71507cc4d5444f0bbcae05c346b4563b.service: Deactivated successfully.
Feb 02 15:01:00 compute-0 sudo[35479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbbpgntpkbdcgrlwtvwjekewkdpdnfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044460.0551383-160-137611467674658/AnsiballZ_command.py'
Feb 02 15:01:00 compute-0 sudo[35479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:00 compute-0 python3.9[35481]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:01 compute-0 sudo[35479]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:01 compute-0 CROND[35659]: (root) CMD (run-parts /etc/cron.hourly)
Feb 02 15:01:01 compute-0 run-parts[35662]: (/etc/cron.hourly) starting 0anacron
Feb 02 15:01:01 compute-0 anacron[35674]: Anacron started on 2026-02-02
Feb 02 15:01:01 compute-0 anacron[35674]: Will run job `cron.daily' in 22 min.
Feb 02 15:01:01 compute-0 anacron[35674]: Will run job `cron.weekly' in 42 min.
Feb 02 15:01:01 compute-0 anacron[35674]: Will run job `cron.monthly' in 62 min.
Feb 02 15:01:01 compute-0 anacron[35674]: Jobs will be executed sequentially
Feb 02 15:01:01 compute-0 run-parts[35678]: (/etc/cron.hourly) finished 0anacron
Feb 02 15:01:01 compute-0 CROND[35655]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 02 15:01:02 compute-0 sudo[35775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjhrxxgopwltmwfkhnnjbyekcqepwzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044461.6043725-168-87042789843833/AnsiballZ_selinux.py'
Feb 02 15:01:02 compute-0 sudo[35775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:02 compute-0 python3.9[35777]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 02 15:01:02 compute-0 sudo[35775]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:03 compute-0 sudo[35927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjekemtwojbvkgmhqzjqklpzxcodekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044462.821465-179-267381217665163/AnsiballZ_command.py'
Feb 02 15:01:03 compute-0 sudo[35927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:03 compute-0 python3.9[35929]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 02 15:01:04 compute-0 sudo[35927]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:04 compute-0 sudo[36080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hliifszrqhtlfsjwcbogwgydcszomlpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044464.4257042-187-178740376893648/AnsiballZ_file.py'
Feb 02 15:01:04 compute-0 sudo[36080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:05 compute-0 python3.9[36082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:01:05 compute-0 sudo[36080]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:05 compute-0 sudo[36232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqcgcekydeevaibprnxkhacrgrjdtfwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044465.3851452-195-267651491577765/AnsiballZ_mount.py'
Feb 02 15:01:05 compute-0 sudo[36232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:06 compute-0 python3.9[36234]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 02 15:01:06 compute-0 sudo[36232]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:01:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:01:07 compute-0 sudo[36385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwhvkvrzfzbuykqqtpycaqiizsgwyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044466.889441-223-231173404250243/AnsiballZ_file.py'
Feb 02 15:01:07 compute-0 sudo[36385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:07 compute-0 python3.9[36387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:01:07 compute-0 sudo[36385]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:07 compute-0 sudo[36537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmejheryjgizmnswbnsnfrbnumnfdna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044467.4987168-231-105911996752222/AnsiballZ_stat.py'
Feb 02 15:01:07 compute-0 sudo[36537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:08 compute-0 python3.9[36539]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:01:08 compute-0 sudo[36537]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:08 compute-0 sudo[36660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgryuksdleawtekkwloqurrkqodozey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044467.4987168-231-105911996752222/AnsiballZ_copy.py'
Feb 02 15:01:08 compute-0 sudo[36660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:08 compute-0 python3.9[36662]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044467.4987168-231-105911996752222/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:01:08 compute-0 sudo[36660]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:09 compute-0 sudo[36812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogpsjtfujaenqiytkgzecngpqrpnupnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044469.1077955-255-206512392536755/AnsiballZ_stat.py'
Feb 02 15:01:09 compute-0 sudo[36812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:09 compute-0 python3.9[36814]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:01:09 compute-0 sudo[36812]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:09 compute-0 sudo[36964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanyyuihgrrixiuyauigymspyycqyelr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044469.6902373-263-55037679631283/AnsiballZ_command.py'
Feb 02 15:01:09 compute-0 sudo[36964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:12 compute-0 python3.9[36966]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:12 compute-0 sudo[36964]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:12 compute-0 sudo[37118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxrassxjcbtmkcoagvuvyyqxjhkoona ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044472.5726674-271-122417795523218/AnsiballZ_file.py'
Feb 02 15:01:12 compute-0 sudo[37118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:13 compute-0 python3.9[37120]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:01:13 compute-0 sudo[37118]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:13 compute-0 sudo[37270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwczwdzjdvlgoeuhawrdrlaexbwkwkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044473.400713-282-253704233144647/AnsiballZ_getent.py'
Feb 02 15:01:13 compute-0 sudo[37270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:14 compute-0 python3.9[37272]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 02 15:01:14 compute-0 sudo[37270]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:14 compute-0 sudo[37423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzylvjvvjbisztihhkqruxnazjnhawjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044474.2131279-290-72831694401657/AnsiballZ_group.py'
Feb 02 15:01:14 compute-0 sudo[37423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:14 compute-0 python3.9[37425]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:01:14 compute-0 groupadd[37426]: group added to /etc/group: name=qemu, GID=107
Feb 02 15:01:14 compute-0 groupadd[37426]: group added to /etc/gshadow: name=qemu
Feb 02 15:01:14 compute-0 groupadd[37426]: new group: name=qemu, GID=107
Feb 02 15:01:14 compute-0 sudo[37423]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:15 compute-0 sudo[37581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofhpjskzocwzatpzmyhrtzzpsmybsrkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044475.0677602-298-220555671693907/AnsiballZ_user.py'
Feb 02 15:01:15 compute-0 sudo[37581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:15 compute-0 python3.9[37583]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 15:01:15 compute-0 useradd[37585]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 15:01:15 compute-0 sudo[37581]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:16 compute-0 sudo[37741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssvxxumepoblfijjfeyskeakybobjdjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044475.9874394-306-51507880381234/AnsiballZ_getent.py'
Feb 02 15:01:16 compute-0 sudo[37741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:16 compute-0 python3.9[37743]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 02 15:01:16 compute-0 sudo[37741]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:16 compute-0 sudo[37894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnypjriaepthgugnrzsgdbkmrlhsjpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044476.5806396-314-80159150913899/AnsiballZ_group.py'
Feb 02 15:01:16 compute-0 sudo[37894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:17 compute-0 python3.9[37896]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:01:17 compute-0 groupadd[37897]: group added to /etc/group: name=hugetlbfs, GID=42477
Feb 02 15:01:17 compute-0 groupadd[37897]: group added to /etc/gshadow: name=hugetlbfs
Feb 02 15:01:17 compute-0 groupadd[37897]: new group: name=hugetlbfs, GID=42477
Feb 02 15:01:17 compute-0 sudo[37894]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:17 compute-0 sudo[38052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwidmgeqarnbxyavnoptebxqoplgdxdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044477.2313547-323-5225123337564/AnsiballZ_file.py'
Feb 02 15:01:17 compute-0 sudo[38052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:17 compute-0 python3.9[38054]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 02 15:01:17 compute-0 sudo[38052]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:18 compute-0 sudo[38204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsxhwrghrvxbdkxnmfcrmqjagcncuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044478.0291827-334-270253065888390/AnsiballZ_dnf.py'
Feb 02 15:01:18 compute-0 sudo[38204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:18 compute-0 python3.9[38206]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:01:20 compute-0 sudo[38204]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:20 compute-0 sudo[38358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-walndvojbgzklanqmwvyskxroaxaxuox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044480.3643732-342-217989618566928/AnsiballZ_file.py'
Feb 02 15:01:20 compute-0 sudo[38358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:20 compute-0 python3.9[38360]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:01:20 compute-0 sudo[38358]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:21 compute-0 sudo[38510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmlojrtedgdecbdoicktogefxxoergev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044481.02615-350-219240263373395/AnsiballZ_stat.py'
Feb 02 15:01:21 compute-0 sudo[38510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:21 compute-0 python3.9[38512]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:01:21 compute-0 sudo[38510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:21 compute-0 sudo[38633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigfwyuucbizwsuajfiebuzmhneekyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044481.02615-350-219240263373395/AnsiballZ_copy.py'
Feb 02 15:01:21 compute-0 sudo[38633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:22 compute-0 python3.9[38635]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770044481.02615-350-219240263373395/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:01:22 compute-0 sudo[38633]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:22 compute-0 sudo[38785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpsnnmlbggxhllnmvbtutxbeczlsaol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044482.1780612-365-2067240696688/AnsiballZ_systemd.py'
Feb 02 15:01:22 compute-0 sudo[38785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:23 compute-0 python3.9[38787]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:01:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 15:01:23 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 02 15:01:23 compute-0 kernel: Bridge firewalling registered
Feb 02 15:01:23 compute-0 systemd-modules-load[38791]: Inserted module 'br_netfilter'
Feb 02 15:01:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 02 15:01:23 compute-0 sudo[38785]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:23 compute-0 sudo[38946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsagkeunazkssddlqdvarkcvkfbwlghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044483.4156985-373-87183496603763/AnsiballZ_stat.py'
Feb 02 15:01:23 compute-0 sudo[38946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:23 compute-0 python3.9[38948]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:01:23 compute-0 sudo[38946]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:24 compute-0 sudo[39069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfsxcrbvgijjhgmlckabpruosavdruiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044483.4156985-373-87183496603763/AnsiballZ_copy.py'
Feb 02 15:01:24 compute-0 sudo[39069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:24 compute-0 python3.9[39071]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770044483.4156985-373-87183496603763/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:01:24 compute-0 sudo[39069]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:25 compute-0 sudo[39221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsybnrkutdoebtcxievcosqnxaieycsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044484.8834658-391-63014392185430/AnsiballZ_dnf.py'
Feb 02 15:01:25 compute-0 sudo[39221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:25 compute-0 python3.9[39223]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:01:28 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:01:28 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:01:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:01:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:01:28 compute-0 systemd[1]: Reloading.
Feb 02 15:01:29 compute-0 systemd-rc-local-generator[39285]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:01:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:01:29 compute-0 sudo[39221]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:30 compute-0 python3.9[40738]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:01:30 compute-0 python3.9[41952]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 02 15:01:31 compute-0 python3.9[42798]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:01:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:01:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:01:31 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.736s CPU time.
Feb 02 15:01:31 compute-0 systemd[1]: run-r6cf148c96cfb45bdb8b2e83e06ba06fb.service: Deactivated successfully.
Feb 02 15:01:32 compute-0 sudo[43432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nornegdrfhvartefaozcytnjfmnaocmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044491.7849438-430-272635767884521/AnsiballZ_command.py'
Feb 02 15:01:32 compute-0 sudo[43432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:32 compute-0 python3.9[43434]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 15:01:32 compute-0 systemd[1]: Starting Authorization Manager...
Feb 02 15:01:32 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 15:01:32 compute-0 polkitd[43651]: Started polkitd version 0.117
Feb 02 15:01:32 compute-0 polkitd[43651]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 15:01:32 compute-0 polkitd[43651]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 15:01:32 compute-0 polkitd[43651]: Finished loading, compiling and executing 2 rules
Feb 02 15:01:32 compute-0 systemd[1]: Started Authorization Manager.
Feb 02 15:01:32 compute-0 polkitd[43651]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 02 15:01:32 compute-0 sudo[43432]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:33 compute-0 sudo[43819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amtwevrogutjwxqsyjjglmhkbbvwstcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044493.0588849-439-97080535864388/AnsiballZ_systemd.py'
Feb 02 15:01:33 compute-0 sudo[43819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:33 compute-0 python3.9[43821]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:01:33 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 02 15:01:33 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 02 15:01:33 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 02 15:01:33 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 15:01:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 15:01:33 compute-0 sudo[43819]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:34 compute-0 python3.9[43982]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 02 15:01:36 compute-0 sudo[44132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmaiortxxthzhyhyfzkughegqinzwqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044496.4461498-496-96241909194989/AnsiballZ_systemd.py'
Feb 02 15:01:36 compute-0 sudo[44132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:36 compute-0 python3.9[44134]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:01:36 compute-0 systemd[1]: Reloading.
Feb 02 15:01:37 compute-0 systemd-rc-local-generator[44159]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:01:37 compute-0 sudo[44132]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:37 compute-0 sudo[44321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqndfvxfovjvubsdknkkzuiejdsejhcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044497.3176756-496-244436263588864/AnsiballZ_systemd.py'
Feb 02 15:01:37 compute-0 sudo[44321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:37 compute-0 python3.9[44323]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:01:37 compute-0 systemd[1]: Reloading.
Feb 02 15:01:38 compute-0 systemd-rc-local-generator[44353]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:01:38 compute-0 sudo[44321]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:38 compute-0 sudo[44510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atnosipmuwlynarldhqfdocbeuszlqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044498.3563461-512-671551399554/AnsiballZ_command.py'
Feb 02 15:01:38 compute-0 sudo[44510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:38 compute-0 python3.9[44512]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:38 compute-0 sudo[44510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:39 compute-0 sudo[44663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndoopwzixgjrntpluorjspnnemozrtfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044498.9267776-520-138993943493180/AnsiballZ_command.py'
Feb 02 15:01:39 compute-0 sudo[44663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:39 compute-0 python3.9[44665]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:39 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 02 15:01:39 compute-0 sudo[44663]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:39 compute-0 sudo[44816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjvatdwxhhfpmazltcqhgrswokbutjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044499.5603573-528-149619267449104/AnsiballZ_command.py'
Feb 02 15:01:39 compute-0 sudo[44816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:40 compute-0 python3.9[44818]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:41 compute-0 sudo[44816]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:41 compute-0 sudo[44978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqjkpkptaylvtqzjdicxnfdhchrqyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044501.6487486-536-267701058613562/AnsiballZ_command.py'
Feb 02 15:01:41 compute-0 sudo[44978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:42 compute-0 python3.9[44980]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:01:42 compute-0 sudo[44978]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:42 compute-0 sudo[45131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefwirywaozwquxrxccrbadqoguxkrlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044502.3052697-544-89355615436464/AnsiballZ_systemd.py'
Feb 02 15:01:42 compute-0 sudo[45131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:42 compute-0 python3.9[45133]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:01:42 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 02 15:01:42 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Feb 02 15:01:42 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Feb 02 15:01:42 compute-0 systemd[1]: Starting Apply Kernel Variables...
Feb 02 15:01:42 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 02 15:01:42 compute-0 systemd[1]: Finished Apply Kernel Variables.
Feb 02 15:01:42 compute-0 sudo[45131]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:43 compute-0 sshd-session[31303]: Connection closed by 192.168.122.30 port 47292
Feb 02 15:01:43 compute-0 sshd-session[31300]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:01:43 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Feb 02 15:01:43 compute-0 systemd[1]: session-9.scope: Consumed 2min 11.938s CPU time.
Feb 02 15:01:43 compute-0 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Feb 02 15:01:43 compute-0 systemd-logind[786]: Removed session 9.
Feb 02 15:01:48 compute-0 sshd-session[45164]: Accepted publickey for zuul from 192.168.122.30 port 59754 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:01:48 compute-0 systemd-logind[786]: New session 10 of user zuul.
Feb 02 15:01:48 compute-0 systemd[1]: Started Session 10 of User zuul.
Feb 02 15:01:48 compute-0 sshd-session[45164]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:01:49 compute-0 python3.9[45317]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:01:50 compute-0 sudo[45471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmphanfzpiwsznwieqwphjlfikrtdifa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044510.1424935-31-261865536509034/AnsiballZ_getent.py'
Feb 02 15:01:50 compute-0 sudo[45471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:50 compute-0 python3.9[45473]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 02 15:01:50 compute-0 sudo[45471]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:51 compute-0 sudo[45624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pggwsbcuyexyagccfxplyshgzufdxnon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044511.0341647-39-228265704524323/AnsiballZ_group.py'
Feb 02 15:01:51 compute-0 sudo[45624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:51 compute-0 python3.9[45626]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:01:51 compute-0 groupadd[45627]: group added to /etc/group: name=openvswitch, GID=42476
Feb 02 15:01:51 compute-0 groupadd[45627]: group added to /etc/gshadow: name=openvswitch
Feb 02 15:01:51 compute-0 groupadd[45627]: new group: name=openvswitch, GID=42476
Feb 02 15:01:51 compute-0 sudo[45624]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:52 compute-0 sudo[45782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaugdgykqassjtfhqtszvegtjwgtpgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044511.7951133-47-57430920082421/AnsiballZ_user.py'
Feb 02 15:01:52 compute-0 sudo[45782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:52 compute-0 python3.9[45784]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 15:01:52 compute-0 useradd[45786]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 15:01:52 compute-0 useradd[45786]: add 'openvswitch' to group 'hugetlbfs'
Feb 02 15:01:52 compute-0 useradd[45786]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 02 15:01:52 compute-0 sudo[45782]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:53 compute-0 sudo[45942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jebyrncnqztnutgobypayrmnrpzcihci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044512.8197744-57-269578571167767/AnsiballZ_setup.py'
Feb 02 15:01:53 compute-0 sudo[45942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:53 compute-0 python3.9[45944]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:01:53 compute-0 sudo[45942]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:53 compute-0 sudo[46026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmahfvfclevewssprcrabfeqfrwmaksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044512.8197744-57-269578571167767/AnsiballZ_dnf.py'
Feb 02 15:01:53 compute-0 sudo[46026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:54 compute-0 python3.9[46028]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 15:01:56 compute-0 sudo[46026]: pam_unix(sudo:session): session closed for user root
Feb 02 15:01:56 compute-0 sudo[46189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaobvttnxprqxavprdyfwsaxrjuhfyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044516.4304733-71-49585699226721/AnsiballZ_dnf.py'
Feb 02 15:01:56 compute-0 sudo[46189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:01:56 compute-0 python3.9[46191]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:02:07 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:02:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:02:07 compute-0 groupadd[46214]: group added to /etc/group: name=unbound, GID=994
Feb 02 15:02:07 compute-0 groupadd[46214]: group added to /etc/gshadow: name=unbound
Feb 02 15:02:07 compute-0 groupadd[46214]: new group: name=unbound, GID=994
Feb 02 15:02:07 compute-0 useradd[46221]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Feb 02 15:02:07 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 02 15:02:07 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 02 15:02:08 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:02:08 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:02:08 compute-0 systemd[1]: Reloading.
Feb 02 15:02:08 compute-0 systemd-rc-local-generator[46719]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:02:08 compute-0 systemd-sysv-generator[46724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:02:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:02:09 compute-0 sudo[46189]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:02:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:02:09 compute-0 systemd[1]: run-r929de4e7ee5146428652005350c2b89d.service: Deactivated successfully.
Feb 02 15:02:09 compute-0 sudo[47288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuibcgquswillbnofkpdfoaeqbvjevpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044529.4181423-79-272937500418039/AnsiballZ_systemd.py'
Feb 02 15:02:09 compute-0 sudo[47288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:10 compute-0 python3.9[47290]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:02:10 compute-0 systemd[1]: Reloading.
Feb 02 15:02:10 compute-0 systemd-rc-local-generator[47320]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:02:10 compute-0 systemd-sysv-generator[47324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:02:10 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Feb 02 15:02:10 compute-0 chown[47332]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 02 15:02:10 compute-0 ovs-ctl[47337]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 02 15:02:10 compute-0 ovs-ctl[47337]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 02 15:02:10 compute-0 ovs-ctl[47337]: Starting ovsdb-server [  OK  ]
Feb 02 15:02:10 compute-0 ovs-vsctl[47386]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 02 15:02:10 compute-0 ovs-vsctl[47406]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"673607ba-6470-4d88-9324-0f750aed69af\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 02 15:02:10 compute-0 ovs-ctl[47337]: Configuring Open vSwitch system IDs [  OK  ]
Feb 02 15:02:10 compute-0 ovs-ctl[47337]: Enabling remote OVSDB managers [  OK  ]
Feb 02 15:02:10 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Feb 02 15:02:10 compute-0 ovs-vsctl[47412]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 02 15:02:10 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 02 15:02:10 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 02 15:02:10 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 02 15:02:11 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Feb 02 15:02:11 compute-0 ovs-ctl[47456]: Inserting openvswitch module [  OK  ]
Feb 02 15:02:11 compute-0 ovs-ctl[47425]: Starting ovs-vswitchd [  OK  ]
Feb 02 15:02:11 compute-0 ovs-vsctl[47474]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 02 15:02:11 compute-0 ovs-ctl[47425]: Enabling remote OVSDB managers [  OK  ]
Feb 02 15:02:11 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 02 15:02:11 compute-0 systemd[1]: Starting Open vSwitch...
Feb 02 15:02:11 compute-0 systemd[1]: Finished Open vSwitch.
Feb 02 15:02:11 compute-0 sudo[47288]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:11 compute-0 python3.9[47625]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:02:12 compute-0 sudo[47775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwwcoktkzzzmffwazdrdqdymvopkpbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044532.0936556-97-241164742231280/AnsiballZ_sefcontext.py'
Feb 02 15:02:12 compute-0 sudo[47775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:12 compute-0 python3.9[47777]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 02 15:02:13 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:02:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:02:13 compute-0 sudo[47775]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:14 compute-0 python3.9[47932]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:02:15 compute-0 sudo[48088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxsfcqhenxjsearyaoqovrfwtkzlvzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044535.0590603-115-273741486688327/AnsiballZ_dnf.py'
Feb 02 15:02:15 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 02 15:02:15 compute-0 sudo[48088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:15 compute-0 python3.9[48090]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:02:16 compute-0 sudo[48088]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:17 compute-0 sudo[48241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckiauuxzrqjegiwkdabjiuakiufemhky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044537.1088228-123-209968771302370/AnsiballZ_command.py'
Feb 02 15:02:17 compute-0 sudo[48241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:17 compute-0 python3.9[48243]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:02:18 compute-0 sudo[48241]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:18 compute-0 sudo[48528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhisxfsxyyttjliqcswndxxmnqldfgkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044538.5245624-131-194118023739435/AnsiballZ_file.py'
Feb 02 15:02:18 compute-0 sudo[48528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:19 compute-0 python3.9[48530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 02 15:02:19 compute-0 sudo[48528]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:19 compute-0 python3.9[48680]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:02:20 compute-0 sudo[48832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziphzcxltlcogdqgynzorttranuadkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044539.958213-147-56967016659005/AnsiballZ_dnf.py'
Feb 02 15:02:20 compute-0 sudo[48832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:20 compute-0 python3.9[48834]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:02:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:02:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:02:26 compute-0 systemd[1]: Reloading.
Feb 02 15:02:26 compute-0 systemd-sysv-generator[48874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:02:26 compute-0 systemd-rc-local-generator[48866]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:02:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:02:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:02:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:02:27 compute-0 systemd[1]: run-r9dc7128980ab4e439774652b4c1f1819.service: Deactivated successfully.
Feb 02 15:02:27 compute-0 sudo[48832]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:27 compute-0 sudo[49151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvpryvezjuaqisemindzgfcaynfbtjkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044547.2217813-155-81615392026622/AnsiballZ_systemd.py'
Feb 02 15:02:27 compute-0 sudo[49151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:27 compute-0 python3.9[49153]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:02:27 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 02 15:02:27 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Feb 02 15:02:27 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Feb 02 15:02:27 compute-0 systemd[1]: Stopping Network Manager...
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7539] caught SIGTERM, shutting down normally.
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7560] dhcp4 (eth0): canceled DHCP transaction
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7560] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7560] dhcp4 (eth0): state changed no lease
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7564] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 15:02:27 compute-0 NetworkManager[7192]: <info>  [1770044547.7631] exiting (success)
Feb 02 15:02:27 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 15:02:27 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 15:02:27 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 02 15:02:27 compute-0 systemd[1]: Stopped Network Manager.
Feb 02 15:02:27 compute-0 systemd[1]: NetworkManager.service: Consumed 16.061s CPU time, 4.1M memory peak, read 0B from disk, written 22.0K to disk.
Feb 02 15:02:27 compute-0 systemd[1]: Starting Network Manager...
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.8418] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3da1c12d-3f65-4f20-960d-600dea66a7e3)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.8421] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.8475] manager[0x55563e38e000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 02 15:02:27 compute-0 systemd[1]: Starting Hostname Service...
Feb 02 15:02:27 compute-0 systemd[1]: Started Hostname Service.
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9385] hostname: hostname: using hostnamed
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9386] hostname: static hostname changed from (none) to "compute-0"
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9392] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9399] manager[0x55563e38e000]: rfkill: Wi-Fi hardware radio set enabled
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9399] manager[0x55563e38e000]: rfkill: WWAN hardware radio set enabled
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9421] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9430] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9431] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9431] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9432] manager: Networking is enabled by state file
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9433] settings: Loaded settings plugin: keyfile (internal)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9437] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9468] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9476] dhcp: init: Using DHCP client 'internal'
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9479] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9485] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9489] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9497] device (lo): Activation: starting connection 'lo' (04c2cb28-6382-41c1-9610-496161f13eea)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9504] device (eth0): carrier: link connected
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9508] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9513] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9514] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9520] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9526] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9532] device (eth1): carrier: link connected
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9536] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9543] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (7c114712-cf58-5bea-858e-99132cd0be47) (indicated)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9543] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9548] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9555] device (eth1): Activation: starting connection 'ci-private-network' (7c114712-cf58-5bea-858e-99132cd0be47)
Feb 02 15:02:27 compute-0 systemd[1]: Started Network Manager.
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9562] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9571] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9573] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9575] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9577] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9583] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9586] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9588] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9597] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9606] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9610] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9619] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9635] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9652] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9654] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9656] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9666] device (lo): Activation: successful, device activated.
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9676] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 15:02:27 compute-0 systemd[1]: Starting Network Manager Wait Online...
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9688] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 02 15:02:27 compute-0 NetworkManager[49171]: <info>  [1770044547.9693] device (eth1): Activation: successful, device activated.
Feb 02 15:02:27 compute-0 sudo[49151]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:28 compute-0 sudo[49359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxweihflnjzrurvntprycapumybkxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044548.1352537-163-78338879644135/AnsiballZ_dnf.py'
Feb 02 15:02:28 compute-0 sudo[49359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:28 compute-0 python3.9[49361]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2652] dhcp4 (eth0): state changed new lease, address=38.129.56.16
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2661] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2732] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2760] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2762] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2764] manager: NetworkManager state is now CONNECTED_SITE
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2767] device (eth0): Activation: successful, device activated.
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2771] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 02 15:02:29 compute-0 NetworkManager[49171]: <info>  [1770044549.2774] manager: startup complete
Feb 02 15:02:29 compute-0 systemd[1]: Finished Network Manager Wait Online.
Feb 02 15:02:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:02:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:02:32 compute-0 systemd[1]: Reloading.
Feb 02 15:02:32 compute-0 systemd-sysv-generator[49435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:02:32 compute-0 systemd-rc-local-generator[49430]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:02:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:02:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:02:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:02:33 compute-0 systemd[1]: run-r79a84335dff5471e86da75f980101f0d.service: Deactivated successfully.
Feb 02 15:02:33 compute-0 sudo[49359]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:34 compute-0 sudo[49846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snakczrkdfjkmuxacjhrhairkjbdbrnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044554.0337868-175-239274812702019/AnsiballZ_stat.py'
Feb 02 15:02:34 compute-0 sudo[49846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:34 compute-0 python3.9[49848]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:02:34 compute-0 sudo[49846]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:35 compute-0 sudo[49998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aegqvulpvafzdegvxdakrsufwtjykxgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044554.7090409-184-234996890194303/AnsiballZ_ini_file.py'
Feb 02 15:02:35 compute-0 sudo[49998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:35 compute-0 python3.9[50000]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:35 compute-0 sudo[49998]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:35 compute-0 sudo[50152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbeizlmzmteznsskacnfhflixxxtwbqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044555.5315123-194-9901952666110/AnsiballZ_ini_file.py'
Feb 02 15:02:35 compute-0 sudo[50152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:35 compute-0 python3.9[50154]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:35 compute-0 sudo[50152]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:36 compute-0 sudo[50304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrolfewlmvwpjqlbnjkrhetupvsyvjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044556.1004682-194-258805132633974/AnsiballZ_ini_file.py'
Feb 02 15:02:36 compute-0 sudo[50304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:36 compute-0 python3.9[50306]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:36 compute-0 sudo[50304]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:36 compute-0 sudo[50456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oexnfdvrqouevfzkyjyscrzfhzhlzeod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044556.6968575-209-117822118765744/AnsiballZ_ini_file.py'
Feb 02 15:02:36 compute-0 sudo[50456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:37 compute-0 python3.9[50458]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:37 compute-0 sudo[50456]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:37 compute-0 sudo[50608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieokyiakzgnxzwapdumfiaemvnvgnhfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044557.178375-209-149593457816986/AnsiballZ_ini_file.py'
Feb 02 15:02:37 compute-0 sudo[50608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:37 compute-0 python3.9[50610]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:37 compute-0 sudo[50608]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:37 compute-0 sudo[50760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkzteomokreofcfegfxyawchfacsuvhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044557.7247863-224-271796189772264/AnsiballZ_stat.py'
Feb 02 15:02:37 compute-0 sudo[50760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:38 compute-0 python3.9[50762]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:02:38 compute-0 sudo[50760]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:38 compute-0 sudo[50883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzsgbhzytyhqhhmmxajknjxmgcnilbjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044557.7247863-224-271796189772264/AnsiballZ_copy.py'
Feb 02 15:02:38 compute-0 sudo[50883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:38 compute-0 python3.9[50885]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044557.7247863-224-271796189772264/.source _original_basename=.k5cslfbe follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:38 compute-0 sudo[50883]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:39 compute-0 sudo[51035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeoqefjyxnbnuisapruakjdaeybuqcgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044558.9581301-239-263115605427813/AnsiballZ_file.py'
Feb 02 15:02:39 compute-0 sudo[51035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:39 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 15:02:39 compute-0 python3.9[51037]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:39 compute-0 sudo[51035]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:39 compute-0 sudo[51187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atzmswcbcliiipnsobiaxfyvjuwrmfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044559.5533316-247-61582110972521/AnsiballZ_edpm_os_net_config_mappings.py'
Feb 02 15:02:39 compute-0 sudo[51187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:40 compute-0 python3.9[51189]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 02 15:02:40 compute-0 sudo[51187]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:40 compute-0 sudo[51339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplvmgduzdhtmkulesqdizkdnevtpyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044560.376013-256-38188932166410/AnsiballZ_file.py'
Feb 02 15:02:40 compute-0 sudo[51339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:40 compute-0 python3.9[51341]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:40 compute-0 sudo[51339]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:41 compute-0 sudo[51491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbltgnfsqqnljmyuomvhpxfomrhkpmqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044561.0774853-266-276996455306179/AnsiballZ_stat.py'
Feb 02 15:02:41 compute-0 sudo[51491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:41 compute-0 sudo[51491]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:41 compute-0 sudo[51614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfznsxteimjmsodqtlpldievonvifspf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044561.0774853-266-276996455306179/AnsiballZ_copy.py'
Feb 02 15:02:41 compute-0 sudo[51614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:42 compute-0 sudo[51614]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:42 compute-0 sudo[51766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjawknisjbpimgwggkmxmqnxagjkcbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044562.2103522-281-122942333568101/AnsiballZ_slurp.py'
Feb 02 15:02:42 compute-0 sudo[51766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:42 compute-0 python3.9[51768]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 02 15:02:42 compute-0 sudo[51766]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:43 compute-0 sudo[51941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvcwcgrhwbbzjytficwiwjanpbdkwiax ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044563.0056477-290-13034080277281/async_wrapper.py j869233864177 300 /home/zuul/.ansible/tmp/ansible-tmp-1770044563.0056477-290-13034080277281/AnsiballZ_edpm_os_net_config.py _'
Feb 02 15:02:43 compute-0 sudo[51941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:43 compute-0 ansible-async_wrapper.py[51943]: Invoked with j869233864177 300 /home/zuul/.ansible/tmp/ansible-tmp-1770044563.0056477-290-13034080277281/AnsiballZ_edpm_os_net_config.py _
Feb 02 15:02:43 compute-0 ansible-async_wrapper.py[51946]: Starting module and watcher
Feb 02 15:02:43 compute-0 ansible-async_wrapper.py[51946]: Start watching 51947 (300)
Feb 02 15:02:43 compute-0 ansible-async_wrapper.py[51947]: Start module (51947)
Feb 02 15:02:43 compute-0 ansible-async_wrapper.py[51943]: Return async_wrapper task started.
Feb 02 15:02:43 compute-0 sudo[51941]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:44 compute-0 python3.9[51948]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Feb 02 15:02:44 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 02 15:02:44 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 02 15:02:44 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 02 15:02:44 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 02 15:02:44 compute-0 kernel: cfg80211: failed to load regulatory.db
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.1680] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.1694] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2301] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2303] audit: op="connection-add" uuid="43e688a7-c41c-4874-a2ef-cd689ac6947d" name="br-ex-br" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2320] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2322] audit: op="connection-add" uuid="c3f65a99-e145-491d-a2a7-d9b404c1ef80" name="br-ex-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2336] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2338] audit: op="connection-add" uuid="6d6778d3-ceb6-43d1-81a5-afe3d4026b51" name="eth1-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2352] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2354] audit: op="connection-add" uuid="65b7e278-1559-4f65-98db-ac466a90c511" name="vlan20-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2368] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2370] audit: op="connection-add" uuid="c352813a-93f6-4069-802d-b81fa3f07989" name="vlan21-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2383] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2385] audit: op="connection-add" uuid="1e3d3943-0e82-4f00-aa5e-6f744e8ab334" name="vlan22-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2398] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2400] audit: op="connection-add" uuid="3f1fe1d1-4ac3-4213-a6cd-8f0d73aa91ed" name="vlan23-port" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2422] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2444] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2446] audit: op="connection-add" uuid="27d58bb6-8159-49ce-ae4b-21a05fa187fc" name="br-ex-if" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2495] audit: op="connection-update" uuid="7c114712-cf58-5bea-858e-99132cd0be47" name="ci-private-network" args="ovs-interface.type,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.routes,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routes,connection.slave-type,connection.controller,connection.port-type,connection.master,connection.timestamp,ovs-external-ids.data" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2514] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2516] audit: op="connection-add" uuid="08fcf43b-8bae-439b-8ecb-05b375b14bbb" name="vlan20-if" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2531] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2532] audit: op="connection-add" uuid="e69d256c-f849-45d0-87c9-9b426c67806d" name="vlan21-if" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2547] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2549] audit: op="connection-add" uuid="7874e159-385d-4756-ac77-840f87f9c3ac" name="vlan22-if" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2563] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2565] audit: op="connection-add" uuid="b45262ac-3af8-4c76-8cea-e1b6b63f12be" name="vlan23-if" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2576] audit: op="connection-delete" uuid="f01884bf-ed74-3e7e-8325-4a6b7f715beb" name="Wired connection 1" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2586] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2588] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2595] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2599] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (43e688a7-c41c-4874-a2ef-cd689ac6947d)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2600] audit: op="connection-activate" uuid="43e688a7-c41c-4874-a2ef-cd689ac6947d" name="br-ex-br" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2602] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2603] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2607] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2611] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c3f65a99-e145-491d-a2a7-d9b404c1ef80)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2613] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2613] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2617] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2620] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (6d6778d3-ceb6-43d1-81a5-afe3d4026b51)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2622] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2623] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2628] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2632] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (65b7e278-1559-4f65-98db-ac466a90c511)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2634] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2635] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2640] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2643] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c352813a-93f6-4069-802d-b81fa3f07989)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2645] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2646] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2650] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2654] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1e3d3943-0e82-4f00-aa5e-6f744e8ab334)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2656] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2656] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2661] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2665] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (3f1fe1d1-4ac3-4213-a6cd-8f0d73aa91ed)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2666] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2669] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2670] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2677] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2678] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2682] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2686] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (27d58bb6-8159-49ce-ae4b-21a05fa187fc)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2687] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2691] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2693] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2694] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2696] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2708] device (eth1): disconnecting for new activation request.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2708] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2711] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2713] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2715] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2719] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2720] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2724] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2728] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (08fcf43b-8bae-439b-8ecb-05b375b14bbb)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2729] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2732] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2733] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2735] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2738] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2739] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2742] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2746] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e69d256c-f849-45d0-87c9-9b426c67806d)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2747] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2750] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2752] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2753] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2756] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2757] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2762] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2766] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (7874e159-385d-4756-ac77-840f87f9c3ac)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2767] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2770] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2771] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2773] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2776] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <warn>  [1770044566.2776] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2780] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2784] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b45262ac-3af8-4c76-8cea-e1b6b63f12be)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2784] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2787] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2789] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2790] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2792] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2804] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2805] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2809] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2810] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2822] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2826] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2830] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2833] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2834] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: ovs-system: entered promiscuous mode
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2839] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2842] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: Timeout policy base is empty
Feb 02 15:02:46 compute-0 systemd-udevd[51954]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2872] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2874] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2878] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2881] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2883] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2884] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2888] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2891] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2893] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2894] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2897] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2900] dhcp4 (eth0): canceled DHCP transaction
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2901] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2901] dhcp4 (eth0): state changed no lease
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2902] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2912] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2921] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51949 uid=0 result="fail" reason="Device is not activated"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2924] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2931] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2938] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2944] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.2947] dhcp4 (eth0): state changed new lease, address=38.129.56.16
Feb 02 15:02:46 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3040] device (eth1): disconnecting for new activation request.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3041] audit: op="connection-activate" uuid="7c114712-cf58-5bea-858e-99132cd0be47" name="ci-private-network" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3041] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3156] device (eth1): Activation: starting connection 'ci-private-network' (7c114712-cf58-5bea-858e-99132cd0be47)
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3163] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3165] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3168] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3170] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3172] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3174] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: br-ex: entered promiscuous mode
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3188] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3208] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3215] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3227] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3233] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3240] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3245] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3250] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3255] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3262] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: vlan22: entered promiscuous mode
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3267] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 systemd-udevd[51955]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3272] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3278] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3282] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3288] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3293] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3298] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3308] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3315] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3331] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3340] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3344] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: vlan21: entered promiscuous mode
Feb 02 15:02:46 compute-0 systemd-udevd[52053]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3387] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 kernel: vlan20: entered promiscuous mode
Feb 02 15:02:46 compute-0 kernel: vlan23: entered promiscuous mode
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3477] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3489] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3491] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3496] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 02 15:02:46 compute-0 systemd-udevd[51953]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3503] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3512] device (eth1): Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3520] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3526] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3553] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3569] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3580] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3602] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3609] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3610] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3612] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3618] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3624] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3628] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3640] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3641] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3644] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3648] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3676] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3718] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3720] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 02 15:02:46 compute-0 NetworkManager[49171]: <info>  [1770044566.3725] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 02 15:02:47 compute-0 sudo[52305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqezupyelvjqkxqyvewgdidsxfdxcelj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044566.9621599-290-161472833457515/AnsiballZ_async_status.py'
Feb 02 15:02:47 compute-0 sudo[52305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:47 compute-0 NetworkManager[49171]: <info>  [1770044567.5632] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Feb 02 15:02:47 compute-0 python3.9[52307]: ansible-ansible.legacy.async_status Invoked with jid=j869233864177.51943 mode=status _async_dir=/root/.ansible_async
Feb 02 15:02:47 compute-0 sudo[52305]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:47 compute-0 NetworkManager[49171]: <info>  [1770044567.7898] checkpoint[0x55563e363950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 02 15:02:47 compute-0 NetworkManager[49171]: <info>  [1770044567.7901] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.1405] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.1418] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.3584] audit: op="networking-control" arg="global-dns-configuration" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.3622] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.3651] audit: op="networking-control" arg="global-dns-configuration" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.3673] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.5314] checkpoint[0x55563e363a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 02 15:02:48 compute-0 NetworkManager[49171]: <info>  [1770044568.5319] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51949 uid=0 result="success"
Feb 02 15:02:48 compute-0 ansible-async_wrapper.py[51947]: Module complete (51947)
Feb 02 15:02:48 compute-0 ansible-async_wrapper.py[51946]: Done in kid B.
Feb 02 15:02:50 compute-0 sudo[52411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plsuekrchwxndtmadclekxcsxocrjowb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044566.9621599-290-161472833457515/AnsiballZ_async_status.py'
Feb 02 15:02:50 compute-0 sudo[52411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:51 compute-0 python3.9[52413]: ansible-ansible.legacy.async_status Invoked with jid=j869233864177.51943 mode=status _async_dir=/root/.ansible_async
Feb 02 15:02:51 compute-0 sudo[52411]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:51 compute-0 sudo[52511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgvkmfouasnllhvjyrfjbvgidblhruku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044566.9621599-290-161472833457515/AnsiballZ_async_status.py'
Feb 02 15:02:51 compute-0 sudo[52511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:51 compute-0 python3.9[52513]: ansible-ansible.legacy.async_status Invoked with jid=j869233864177.51943 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 15:02:51 compute-0 sudo[52511]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:51 compute-0 sudo[52663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egdpsaqxxhhwiunbcacarfmtaoxadkuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044571.7067537-317-99179463337093/AnsiballZ_stat.py'
Feb 02 15:02:51 compute-0 sudo[52663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:52 compute-0 python3.9[52665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:02:52 compute-0 sudo[52663]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:52 compute-0 sudo[52786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqisqrtwvpcaysreolpnmvkbxnvgfijr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044571.7067537-317-99179463337093/AnsiballZ_copy.py'
Feb 02 15:02:52 compute-0 sudo[52786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:52 compute-0 python3.9[52788]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044571.7067537-317-99179463337093/.source.returncode _original_basename=.dlt1m2ch follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:52 compute-0 sudo[52786]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:53 compute-0 sudo[52938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevqbevzvvtrvdidkfjpcinlqixafhnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044572.9666736-333-266991162398886/AnsiballZ_stat.py'
Feb 02 15:02:53 compute-0 sudo[52938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:53 compute-0 python3.9[52940]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:02:53 compute-0 sudo[52938]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:53 compute-0 sudo[53061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpotlehblgbtwxyhxnlzxdymgkdfaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044572.9666736-333-266991162398886/AnsiballZ_copy.py'
Feb 02 15:02:53 compute-0 sudo[53061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:54 compute-0 python3.9[53063]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044572.9666736-333-266991162398886/.source.cfg _original_basename=.77twd12k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:02:54 compute-0 sudo[53061]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:54 compute-0 sudo[53214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywfrztxfhlcynkmxtvzocdtromkzrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044574.1690161-348-238562220335693/AnsiballZ_systemd.py'
Feb 02 15:02:54 compute-0 sudo[53214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:02:54 compute-0 python3.9[53216]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:02:54 compute-0 systemd[1]: Reloading Network Manager...
Feb 02 15:02:54 compute-0 NetworkManager[49171]: <info>  [1770044574.7835] audit: op="reload" arg="0" pid=53220 uid=0 result="success"
Feb 02 15:02:54 compute-0 NetworkManager[49171]: <info>  [1770044574.7844] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 02 15:02:54 compute-0 systemd[1]: Reloaded Network Manager.
Feb 02 15:02:54 compute-0 sudo[53214]: pam_unix(sudo:session): session closed for user root
Feb 02 15:02:55 compute-0 sshd-session[45167]: Connection closed by 192.168.122.30 port 59754
Feb 02 15:02:55 compute-0 sshd-session[45164]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:02:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Feb 02 15:02:55 compute-0 systemd[1]: session-10.scope: Consumed 43.767s CPU time.
Feb 02 15:02:55 compute-0 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Feb 02 15:02:55 compute-0 systemd-logind[786]: Removed session 10.
Feb 02 15:02:57 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 02 15:03:00 compute-0 sshd-session[53252]: Accepted publickey for zuul from 192.168.122.30 port 33982 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:03:00 compute-0 systemd-logind[786]: New session 11 of user zuul.
Feb 02 15:03:00 compute-0 systemd[1]: Started Session 11 of User zuul.
Feb 02 15:03:00 compute-0 sshd-session[53252]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:03:01 compute-0 python3.9[53406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:03:02 compute-0 python3.9[53560]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:03:03 compute-0 python3.9[53753]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:03:04 compute-0 sshd-session[53255]: Connection closed by 192.168.122.30 port 33982
Feb 02 15:03:04 compute-0 sshd-session[53252]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:03:04 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Feb 02 15:03:04 compute-0 systemd[1]: session-11.scope: Consumed 1.971s CPU time.
Feb 02 15:03:04 compute-0 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Feb 02 15:03:04 compute-0 systemd-logind[786]: Removed session 11.
Feb 02 15:03:04 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 02 15:03:09 compute-0 sshd-session[53782]: Accepted publickey for zuul from 192.168.122.30 port 38202 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:03:09 compute-0 systemd-logind[786]: New session 12 of user zuul.
Feb 02 15:03:09 compute-0 systemd[1]: Started Session 12 of User zuul.
Feb 02 15:03:09 compute-0 sshd-session[53782]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:03:10 compute-0 python3.9[53936]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:03:11 compute-0 python3.9[54090]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:03:12 compute-0 sudo[54244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvnnbkeyqwsfrxrbavukssxvgjxwdyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044591.7513764-35-73573938036981/AnsiballZ_setup.py'
Feb 02 15:03:12 compute-0 sudo[54244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:12 compute-0 python3.9[54246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:03:12 compute-0 sudo[54244]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:12 compute-0 sudo[54328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaiqkdfqkmjskizurdhkqxtlpbpkbgow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044591.7513764-35-73573938036981/AnsiballZ_dnf.py'
Feb 02 15:03:12 compute-0 sudo[54328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:13 compute-0 python3.9[54330]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:03:14 compute-0 sudo[54328]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:14 compute-0 sudo[54482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiiovlfdtrkjvuyikdsqqcxzaqjqwhvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044594.5562499-47-253730871339932/AnsiballZ_setup.py'
Feb 02 15:03:14 compute-0 sudo[54482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:15 compute-0 python3.9[54484]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:03:15 compute-0 sudo[54482]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:15 compute-0 sudo[54677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nboysnqtlyrrnwhkqybiqhqwfojdzmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044595.546027-58-198806555334495/AnsiballZ_file.py'
Feb 02 15:03:15 compute-0 sudo[54677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:16 compute-0 python3.9[54679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:16 compute-0 sudo[54677]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:16 compute-0 sudo[54830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mckixnvgpkivrypiudofulrhdyxygvqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044596.2597082-66-174315056878165/AnsiballZ_command.py'
Feb 02 15:03:16 compute-0 sudo[54830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:16 compute-0 python3.9[54832]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat402537547-merged.mount: Deactivated successfully.
Feb 02 15:03:16 compute-0 podman[54833]: 2026-02-02 15:03:16.922544096 +0000 UTC m=+0.041509120 system refresh
Feb 02 15:03:16 compute-0 sudo[54830]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:17 compute-0 sudo[54993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjkvomcvugshlwtqqgvyfdtybmciiynd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044597.086391-74-129045421528320/AnsiballZ_stat.py'
Feb 02 15:03:17 compute-0 sudo[54993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:17 compute-0 python3.9[54995]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:17 compute-0 sudo[54993]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:03:18 compute-0 sudo[55116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nofnnszakbijkcrhfjmzngzaeokloalv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044597.086391-74-129045421528320/AnsiballZ_copy.py'
Feb 02 15:03:18 compute-0 sudo[55116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:18 compute-0 python3.9[55118]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044597.086391-74-129045421528320/.source.json follow=False _original_basename=podman_network_config.j2 checksum=dadae08b14b2582cfa327e46a7871edd54542402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:18 compute-0 sudo[55116]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:18 compute-0 sudo[55268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwsjibnzharszxwefqcirawnnzweals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044598.5317671-89-100839650983601/AnsiballZ_stat.py'
Feb 02 15:03:18 compute-0 sudo[55268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:18 compute-0 python3.9[55270]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:18 compute-0 sudo[55268]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:19 compute-0 sudo[55391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezpvopulcbetljvgotqypclscbjzieee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044598.5317671-89-100839650983601/AnsiballZ_copy.py'
Feb 02 15:03:19 compute-0 sudo[55391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:19 compute-0 python3.9[55393]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770044598.5317671-89-100839650983601/.source.conf follow=False _original_basename=registries.conf.j2 checksum=4591d50a695378a6731466fc923a7fec458ffe68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:03:19 compute-0 sudo[55391]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:20 compute-0 sudo[55543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwyveozmjexrkprceimvmwriegaoecr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044599.613957-105-79309054122179/AnsiballZ_ini_file.py'
Feb 02 15:03:20 compute-0 sudo[55543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:20 compute-0 python3.9[55545]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:03:20 compute-0 sudo[55543]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:20 compute-0 sudo[55695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemuiqulzeiyascfzngwzhnzvxwvbhey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044600.4664183-105-154088706025806/AnsiballZ_ini_file.py'
Feb 02 15:03:20 compute-0 sudo[55695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:20 compute-0 python3.9[55697]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:03:20 compute-0 sudo[55695]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:21 compute-0 sudo[55847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcsezmbdvjlyotnhotpsyyyfsuveezv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044601.0442934-105-152431219179049/AnsiballZ_ini_file.py'
Feb 02 15:03:21 compute-0 sudo[55847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:21 compute-0 python3.9[55849]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:03:21 compute-0 sudo[55847]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:21 compute-0 sudo[55999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhloasimmevwtzbakukkuebrvljjlxyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044601.591312-105-198302561085250/AnsiballZ_ini_file.py'
Feb 02 15:03:21 compute-0 sudo[55999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:22 compute-0 python3.9[56001]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:03:22 compute-0 sudo[55999]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:22 compute-0 sudo[56151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrdefbgwprmgiksfugtpejvhrcsngvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044602.2914572-136-118509510626854/AnsiballZ_dnf.py'
Feb 02 15:03:22 compute-0 sudo[56151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:22 compute-0 python3.9[56153]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:03:23 compute-0 sudo[56151]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:24 compute-0 sudo[56304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biixrenrrjoillbqvvneysenfrvjopzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044604.3588185-147-85845869269721/AnsiballZ_setup.py'
Feb 02 15:03:24 compute-0 sudo[56304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:24 compute-0 python3.9[56306]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:03:24 compute-0 sudo[56304]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:25 compute-0 sudo[56458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btupkhsugqzfszdpidixoniyparxuslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044605.0349488-155-196784553965310/AnsiballZ_stat.py'
Feb 02 15:03:25 compute-0 sudo[56458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:25 compute-0 python3.9[56460]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:03:25 compute-0 sudo[56458]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:25 compute-0 sudo[56610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybrfjfhcbwqimcxgnweiiofaqcsbemwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044605.687603-164-145175730200675/AnsiballZ_stat.py'
Feb 02 15:03:25 compute-0 sudo[56610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:26 compute-0 python3.9[56612]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:03:26 compute-0 sudo[56610]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:26 compute-0 sudo[56762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafyuxaqmpxqnfjqrggtirmtiiyrgtho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044606.385125-174-192663527801565/AnsiballZ_command.py'
Feb 02 15:03:26 compute-0 sudo[56762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:26 compute-0 python3.9[56764]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:03:26 compute-0 sudo[56762]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:27 compute-0 sudo[56915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngpkeyrmexonbpycndyomvaprwdilpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044607.1165853-184-16714901401823/AnsiballZ_service_facts.py'
Feb 02 15:03:27 compute-0 sudo[56915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:27 compute-0 python3.9[56917]: ansible-service_facts Invoked
Feb 02 15:03:27 compute-0 network[56934]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:03:27 compute-0 network[56935]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:03:27 compute-0 network[56936]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:03:30 compute-0 sudo[56915]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:31 compute-0 sudo[57219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goalibbgbeejlkpudfninfprvubosrra ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1770044610.705006-199-258167936110477/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1770044610.705006-199-258167936110477/args'
Feb 02 15:03:31 compute-0 sudo[57219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:31 compute-0 sudo[57219]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:31 compute-0 sudo[57386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvovigsdwgfxuplladhropqhoygopxco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044611.394295-210-32760137052895/AnsiballZ_dnf.py'
Feb 02 15:03:31 compute-0 sudo[57386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:31 compute-0 python3.9[57388]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:03:33 compute-0 sudo[57386]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:33 compute-0 sudo[57539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqpdweiuevbraszcpvvoksyrachameq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044613.3565648-223-259677798441719/AnsiballZ_package_facts.py'
Feb 02 15:03:33 compute-0 sudo[57539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:34 compute-0 python3.9[57541]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 02 15:03:34 compute-0 sudo[57539]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:35 compute-0 sudo[57691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-remfegcmdwtuzubkqklfqyxkkowkotig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044614.8017595-233-83342781700512/AnsiballZ_stat.py'
Feb 02 15:03:35 compute-0 sudo[57691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:35 compute-0 python3.9[57693]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:35 compute-0 sudo[57691]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:35 compute-0 sudo[57816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yorrehmpkytuxkglhdmkuqywzbmfhgzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044614.8017595-233-83342781700512/AnsiballZ_copy.py'
Feb 02 15:03:35 compute-0 sudo[57816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:35 compute-0 python3.9[57818]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044614.8017595-233-83342781700512/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:35 compute-0 sudo[57816]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:36 compute-0 sudo[57970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxpjvvnepkdrryonqlktnmvzhqshamuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044616.0130568-248-116337657615909/AnsiballZ_stat.py'
Feb 02 15:03:36 compute-0 sudo[57970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:36 compute-0 python3.9[57972]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:36 compute-0 sudo[57970]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:36 compute-0 sudo[58095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqfkhcowsycmblkkxqzsdeqdffgavph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044616.0130568-248-116337657615909/AnsiballZ_copy.py'
Feb 02 15:03:36 compute-0 sudo[58095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:36 compute-0 python3.9[58097]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044616.0130568-248-116337657615909/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:36 compute-0 sudo[58095]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:37 compute-0 sudo[58249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztlhnrfnrabcetnovauzmgbcnwifnily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044617.3897407-269-81193815729849/AnsiballZ_lineinfile.py'
Feb 02 15:03:37 compute-0 sudo[58249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:37 compute-0 python3.9[58251]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:38 compute-0 sudo[58249]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:38 compute-0 sudo[58403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmxgcwqhubusiaqpndyggqmzybtcury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044618.6443546-284-146097497633682/AnsiballZ_setup.py'
Feb 02 15:03:38 compute-0 sudo[58403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:39 compute-0 python3.9[58405]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:03:39 compute-0 sudo[58403]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:40 compute-0 sudo[58487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvbnlpuahwklgzgqvstlendknncjopwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044618.6443546-284-146097497633682/AnsiballZ_systemd.py'
Feb 02 15:03:40 compute-0 sudo[58487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:40 compute-0 python3.9[58489]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:03:40 compute-0 sudo[58487]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:41 compute-0 sudo[58641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgfaxgswigkxzhizsjbbpqfcpkhjvql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044620.8903997-300-137782563852414/AnsiballZ_setup.py'
Feb 02 15:03:41 compute-0 sudo[58641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:41 compute-0 python3.9[58643]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:03:41 compute-0 sudo[58641]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:41 compute-0 sudo[58725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auujfhnfmamjudpyagbamvnfyqtlwlht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044620.8903997-300-137782563852414/AnsiballZ_systemd.py'
Feb 02 15:03:41 compute-0 sudo[58725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:42 compute-0 python3.9[58727]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:03:42 compute-0 chronyd[790]: chronyd exiting
Feb 02 15:03:42 compute-0 systemd[1]: Stopping NTP client/server...
Feb 02 15:03:42 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Feb 02 15:03:42 compute-0 systemd[1]: Stopped NTP client/server.
Feb 02 15:03:42 compute-0 systemd[1]: Starting NTP client/server...
Feb 02 15:03:42 compute-0 chronyd[58735]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 02 15:03:42 compute-0 chronyd[58735]: Frequency -23.841 +/- 0.068 ppm read from /var/lib/chrony/drift
Feb 02 15:03:42 compute-0 chronyd[58735]: Loaded seccomp filter (level 2)
Feb 02 15:03:42 compute-0 systemd[1]: Started NTP client/server.
Feb 02 15:03:42 compute-0 sudo[58725]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:42 compute-0 sshd-session[53785]: Connection closed by 192.168.122.30 port 38202
Feb 02 15:03:42 compute-0 sshd-session[53782]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:03:42 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Feb 02 15:03:42 compute-0 systemd[1]: session-12.scope: Consumed 22.119s CPU time.
Feb 02 15:03:42 compute-0 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Feb 02 15:03:42 compute-0 systemd-logind[786]: Removed session 12.
Feb 02 15:03:48 compute-0 sshd-session[58761]: Accepted publickey for zuul from 192.168.122.30 port 44632 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:03:48 compute-0 systemd-logind[786]: New session 13 of user zuul.
Feb 02 15:03:48 compute-0 systemd[1]: Started Session 13 of User zuul.
Feb 02 15:03:48 compute-0 sshd-session[58761]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:03:48 compute-0 sudo[58914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpucsnyinvvayorwjyvcgiljdebgwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044628.139237-17-4632123894730/AnsiballZ_file.py'
Feb 02 15:03:48 compute-0 sudo[58914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:48 compute-0 python3.9[58916]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:48 compute-0 sudo[58914]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:49 compute-0 sudo[59066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzcaogkrdikqeiwaqphfnrmdpbbsbsjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044628.981847-29-223467499526744/AnsiballZ_stat.py'
Feb 02 15:03:49 compute-0 sudo[59066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:49 compute-0 python3.9[59068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:49 compute-0 sudo[59066]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:50 compute-0 sudo[59189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkkfmhgfdbsobgimxjmsueuwyosmmeaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044628.981847-29-223467499526744/AnsiballZ_copy.py'
Feb 02 15:03:50 compute-0 sudo[59189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:50 compute-0 python3.9[59191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044628.981847-29-223467499526744/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:50 compute-0 sudo[59189]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:50 compute-0 sshd-session[58764]: Connection closed by 192.168.122.30 port 44632
Feb 02 15:03:50 compute-0 sshd-session[58761]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:03:50 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Feb 02 15:03:50 compute-0 systemd[1]: session-13.scope: Consumed 1.490s CPU time.
Feb 02 15:03:50 compute-0 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Feb 02 15:03:50 compute-0 systemd-logind[786]: Removed session 13.
Feb 02 15:03:56 compute-0 sshd-session[59216]: Accepted publickey for zuul from 192.168.122.30 port 33516 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:03:56 compute-0 systemd-logind[786]: New session 14 of user zuul.
Feb 02 15:03:56 compute-0 systemd[1]: Started Session 14 of User zuul.
Feb 02 15:03:56 compute-0 sshd-session[59216]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:03:57 compute-0 python3.9[59369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:03:58 compute-0 sudo[59523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkebkuochjyisycfvvejfwuhxdkpyogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044637.761687-28-240074766030198/AnsiballZ_file.py'
Feb 02 15:03:58 compute-0 sudo[59523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:58 compute-0 python3.9[59525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:58 compute-0 sudo[59523]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:59 compute-0 sudo[59698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqzqyfmdobpwrlvdfbirosavnsdormp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044638.5601563-36-7988944921262/AnsiballZ_stat.py'
Feb 02 15:03:59 compute-0 sudo[59698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:59 compute-0 python3.9[59700]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:03:59 compute-0 sudo[59698]: pam_unix(sudo:session): session closed for user root
Feb 02 15:03:59 compute-0 sudo[59821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evmsmwgqzjxwufvqbfekzbuclbuiyond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044638.5601563-36-7988944921262/AnsiballZ_copy.py'
Feb 02 15:03:59 compute-0 sudo[59821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:03:59 compute-0 python3.9[59823]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1770044638.5601563-36-7988944921262/.source.json _original_basename=.j4qwhkcg follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:03:59 compute-0 sudo[59821]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:00 compute-0 sudo[59973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlxagxxpmwzmwzkkbeighhiabeifhqov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044640.3545237-59-134772381131152/AnsiballZ_stat.py'
Feb 02 15:04:00 compute-0 sudo[59973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:00 compute-0 python3.9[59975]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:00 compute-0 sudo[59973]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:01 compute-0 sudo[60096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xarjifoozqnvzdjiacdqmcbpeysgepdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044640.3545237-59-134772381131152/AnsiballZ_copy.py'
Feb 02 15:04:01 compute-0 sudo[60096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:01 compute-0 python3.9[60098]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044640.3545237-59-134772381131152/.source _original_basename=._9cxw4m7 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:01 compute-0 sudo[60096]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:01 compute-0 sudo[60248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtqtwhjtrzmxonottgtvpflbjapgztbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044641.471068-75-156197595149126/AnsiballZ_file.py'
Feb 02 15:04:01 compute-0 sudo[60248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:01 compute-0 python3.9[60250]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:04:01 compute-0 sudo[60248]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:02 compute-0 sudo[60400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efdapiiccgiiongixadwhbuxuxgjhcmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044642.1522262-83-8374178814621/AnsiballZ_stat.py'
Feb 02 15:04:02 compute-0 sudo[60400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:02 compute-0 python3.9[60402]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:02 compute-0 sudo[60400]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:02 compute-0 sudo[60523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjynjmzxpbjlukmmcbmvpeqnhjkluhop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044642.1522262-83-8374178814621/AnsiballZ_copy.py'
Feb 02 15:04:02 compute-0 sudo[60523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:03 compute-0 python3.9[60525]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770044642.1522262-83-8374178814621/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:04:03 compute-0 sudo[60523]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:03 compute-0 sudo[60675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwztywdxfqvuqtgynbcnegvyjxjtrzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044643.271131-83-139619064347805/AnsiballZ_stat.py'
Feb 02 15:04:03 compute-0 sudo[60675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:03 compute-0 python3.9[60677]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:03 compute-0 sudo[60675]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:04 compute-0 sudo[60798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djbdlrenarqjaghhwcdhlllkqbprkfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044643.271131-83-139619064347805/AnsiballZ_copy.py'
Feb 02 15:04:04 compute-0 sudo[60798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:04 compute-0 python3.9[60800]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770044643.271131-83-139619064347805/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:04:04 compute-0 sudo[60798]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:04 compute-0 sudo[60950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxknexfpoitebeyujakanuuvaxgijgtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044644.4755006-112-1896465717013/AnsiballZ_file.py'
Feb 02 15:04:04 compute-0 sudo[60950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:04 compute-0 python3.9[60952]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:04 compute-0 sudo[60950]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:05 compute-0 sudo[61102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonmkkgovrcyzfyxtpmeykiuyvgulpvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044645.150899-120-212661166601645/AnsiballZ_stat.py'
Feb 02 15:04:05 compute-0 sudo[61102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:05 compute-0 python3.9[61104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:05 compute-0 sudo[61102]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:05 compute-0 sudo[61225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abntjmvyeufrhycsoqboosjpyakigkrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044645.150899-120-212661166601645/AnsiballZ_copy.py'
Feb 02 15:04:05 compute-0 sudo[61225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:06 compute-0 python3.9[61227]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044645.150899-120-212661166601645/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:06 compute-0 sudo[61225]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:06 compute-0 sudo[61377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpbujmcbvatnqzgmapxsfxhisqbcgazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044646.357099-135-101722078259027/AnsiballZ_stat.py'
Feb 02 15:04:06 compute-0 sudo[61377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:06 compute-0 python3.9[61379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:06 compute-0 sudo[61377]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:07 compute-0 sudo[61500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btvutizukihddzjavtgmqvefnuhfnfyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044646.357099-135-101722078259027/AnsiballZ_copy.py'
Feb 02 15:04:07 compute-0 sudo[61500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:07 compute-0 python3.9[61502]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044646.357099-135-101722078259027/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:07 compute-0 sudo[61500]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:08 compute-0 sudo[61652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofuqytspcebkenfhzvbkwjvwxcxcglp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044647.533466-150-103542540358089/AnsiballZ_systemd.py'
Feb 02 15:04:08 compute-0 sudo[61652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:08 compute-0 python3.9[61654]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:04:08 compute-0 systemd[1]: Reloading.
Feb 02 15:04:08 compute-0 systemd-rc-local-generator[61680]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:08 compute-0 systemd-sysv-generator[61684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:08 compute-0 systemd[1]: Reloading.
Feb 02 15:04:08 compute-0 systemd-rc-local-generator[61719]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:08 compute-0 systemd-sysv-generator[61722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:08 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Feb 02 15:04:08 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Feb 02 15:04:08 compute-0 sudo[61652]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:09 compute-0 sudo[61879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqncglsklerfryzshwygevyujywcalox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044649.0096817-158-241368836725539/AnsiballZ_stat.py'
Feb 02 15:04:09 compute-0 sudo[61879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:09 compute-0 python3.9[61881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:09 compute-0 sudo[61879]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:09 compute-0 sudo[62002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjcriireubsovzeqtpvvdacliiiwxtos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044649.0096817-158-241368836725539/AnsiballZ_copy.py'
Feb 02 15:04:09 compute-0 sudo[62002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:09 compute-0 python3.9[62004]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044649.0096817-158-241368836725539/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:09 compute-0 sudo[62002]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:10 compute-0 sudo[62154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zclehjmqtgedeyaokixiwbkuxepbajwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044650.116069-173-108384192774003/AnsiballZ_stat.py'
Feb 02 15:04:10 compute-0 sudo[62154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:10 compute-0 python3.9[62156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:10 compute-0 sudo[62154]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:10 compute-0 sudo[62277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rogkappdwrobtvyrraeofzrervllrdme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044650.116069-173-108384192774003/AnsiballZ_copy.py'
Feb 02 15:04:10 compute-0 sudo[62277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:11 compute-0 python3.9[62279]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044650.116069-173-108384192774003/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:11 compute-0 sudo[62277]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:11 compute-0 sudo[62429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hngwlxvgjczpngqaebiewouffcqazfqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044651.250803-188-183595438592848/AnsiballZ_systemd.py'
Feb 02 15:04:11 compute-0 sudo[62429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:11 compute-0 python3.9[62431]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:04:11 compute-0 systemd[1]: Reloading.
Feb 02 15:04:11 compute-0 systemd-rc-local-generator[62458]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:11 compute-0 systemd-sysv-generator[62462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:12 compute-0 systemd[1]: Reloading.
Feb 02 15:04:12 compute-0 systemd-rc-local-generator[62496]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:12 compute-0 systemd-sysv-generator[62500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:12 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 15:04:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 15:04:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 15:04:12 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 15:04:12 compute-0 sudo[62429]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:13 compute-0 python3.9[62657]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:04:13 compute-0 network[62674]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:04:13 compute-0 network[62675]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:04:13 compute-0 network[62676]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:04:15 compute-0 sudo[62936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzuhhhzghkkqsnupncuownatijgzwxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044655.5740252-204-210396742529303/AnsiballZ_systemd.py'
Feb 02 15:04:15 compute-0 sudo[62936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:16 compute-0 python3.9[62938]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:04:16 compute-0 systemd[1]: Reloading.
Feb 02 15:04:16 compute-0 systemd-rc-local-generator[62957]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:16 compute-0 systemd-sysv-generator[62961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:16 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 02 15:04:16 compute-0 iptables.init[62978]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 02 15:04:16 compute-0 iptables.init[62978]: iptables: Flushing firewall rules: [  OK  ]
Feb 02 15:04:16 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Feb 02 15:04:16 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 02 15:04:16 compute-0 sudo[62936]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:17 compute-0 sudo[63172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnvlybihuplomrkdkmllezmxmgtbtbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044656.8472948-204-84122982903329/AnsiballZ_systemd.py'
Feb 02 15:04:17 compute-0 sudo[63172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:17 compute-0 python3.9[63174]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:04:17 compute-0 sudo[63172]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:17 compute-0 sudo[63326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwbavtvvlxybqzltocjthyktctcecgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044657.6925564-220-240558964483744/AnsiballZ_systemd.py'
Feb 02 15:04:17 compute-0 sudo[63326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:18 compute-0 python3.9[63328]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:04:18 compute-0 systemd[1]: Reloading.
Feb 02 15:04:18 compute-0 systemd-rc-local-generator[63358]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:04:18 compute-0 systemd-sysv-generator[63361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:04:18 compute-0 systemd[1]: Starting Netfilter Tables...
Feb 02 15:04:18 compute-0 systemd[1]: Finished Netfilter Tables.
Feb 02 15:04:18 compute-0 sudo[63326]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:19 compute-0 sudo[63518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxotnixzppiciceubwarqnnidyvokviv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044658.6851506-228-165550552512325/AnsiballZ_command.py'
Feb 02 15:04:19 compute-0 sudo[63518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:19 compute-0 python3.9[63520]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:04:19 compute-0 sudo[63518]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:19 compute-0 sudo[63671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglusxoczdcortetalqztuaoxbaibcoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044659.6720836-242-215539763564868/AnsiballZ_stat.py'
Feb 02 15:04:19 compute-0 sudo[63671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:20 compute-0 python3.9[63673]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:20 compute-0 sudo[63671]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:20 compute-0 sudo[63796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayrboeceoxehbgedguxtzsdahmptrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044659.6720836-242-215539763564868/AnsiballZ_copy.py'
Feb 02 15:04:20 compute-0 sudo[63796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:20 compute-0 python3.9[63798]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044659.6720836-242-215539763564868/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:20 compute-0 sudo[63796]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:21 compute-0 sudo[63949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacicpbipganynjiqerrwbugjhtpywzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044660.8193057-257-61671900155785/AnsiballZ_systemd.py'
Feb 02 15:04:21 compute-0 sudo[63949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:21 compute-0 python3.9[63951]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:04:21 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Feb 02 15:04:21 compute-0 sshd[1005]: Received SIGHUP; restarting.
Feb 02 15:04:21 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Feb 02 15:04:21 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Feb 02 15:04:21 compute-0 sshd[1005]: Server listening on :: port 22.
Feb 02 15:04:21 compute-0 sudo[63949]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:21 compute-0 sudo[64105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdpovvijdgpjtqzzidckmpxlldxymhag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044661.5820413-265-250645119882970/AnsiballZ_file.py'
Feb 02 15:04:21 compute-0 sudo[64105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:21 compute-0 python3.9[64107]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:21 compute-0 sudo[64105]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:22 compute-0 sudo[64257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvswrypfhjkdqedxwlywsubcwuxkljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044662.134652-273-163169495274042/AnsiballZ_stat.py'
Feb 02 15:04:22 compute-0 sudo[64257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:22 compute-0 python3.9[64259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:22 compute-0 sudo[64257]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:22 compute-0 sudo[64380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobjodeprswjqcizhukbsoyiumfofweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044662.134652-273-163169495274042/AnsiballZ_copy.py'
Feb 02 15:04:22 compute-0 sudo[64380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:23 compute-0 python3.9[64382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044662.134652-273-163169495274042/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:23 compute-0 sudo[64380]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:23 compute-0 sudo[64532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inuwzdocidfyuhsfzsnommcyosybmkqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044663.420226-291-77875781591435/AnsiballZ_timezone.py'
Feb 02 15:04:23 compute-0 sudo[64532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:24 compute-0 python3.9[64534]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 15:04:24 compute-0 systemd[1]: Starting Time & Date Service...
Feb 02 15:04:24 compute-0 systemd[1]: Started Time & Date Service.
Feb 02 15:04:24 compute-0 sudo[64532]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:24 compute-0 sudo[64688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsitpadcwgswyqolcvorxkiprpqoruta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044664.5066507-300-144926935821794/AnsiballZ_file.py'
Feb 02 15:04:24 compute-0 sudo[64688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:24 compute-0 python3.9[64690]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:24 compute-0 sudo[64688]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:25 compute-0 sudo[64840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpsyucfkfuvmrhyvooigdmwcwhttpuvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044665.1372263-308-132897549119953/AnsiballZ_stat.py'
Feb 02 15:04:25 compute-0 sudo[64840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:25 compute-0 python3.9[64842]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:25 compute-0 sudo[64840]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:25 compute-0 sudo[64963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-womhajfgzcnxgnajuflnvdukvruaplzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044665.1372263-308-132897549119953/AnsiballZ_copy.py'
Feb 02 15:04:25 compute-0 sudo[64963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:26 compute-0 python3.9[64965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044665.1372263-308-132897549119953/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:26 compute-0 sudo[64963]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:26 compute-0 sudo[65115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqeduogefcfdthpohaqhbtgkkkyvzbvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044666.284276-323-2965161716045/AnsiballZ_stat.py'
Feb 02 15:04:26 compute-0 sudo[65115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:26 compute-0 python3.9[65117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:26 compute-0 sudo[65115]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:27 compute-0 sudo[65238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aosukaopbjyfqhvcwswzbnxdqzbqkiww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044666.284276-323-2965161716045/AnsiballZ_copy.py'
Feb 02 15:04:27 compute-0 sudo[65238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:27 compute-0 python3.9[65240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770044666.284276-323-2965161716045/.source.yaml _original_basename=._jeiv890 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:27 compute-0 sudo[65238]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:27 compute-0 sudo[65390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itggxykjqnqxgeyhujjznpfkfmjdjypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044667.4053218-338-198422494049528/AnsiballZ_stat.py'
Feb 02 15:04:27 compute-0 sudo[65390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:27 compute-0 python3.9[65392]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:27 compute-0 sudo[65390]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:28 compute-0 sudo[65513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igjbtzluwxvsuezusfjnvkfrafrapqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044667.4053218-338-198422494049528/AnsiballZ_copy.py'
Feb 02 15:04:28 compute-0 sudo[65513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:28 compute-0 python3.9[65515]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044667.4053218-338-198422494049528/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:28 compute-0 sudo[65513]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:28 compute-0 sudo[65665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpgzylkaqheioihivblyyxjldkoldqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044668.5197263-353-101633920756521/AnsiballZ_command.py'
Feb 02 15:04:28 compute-0 sudo[65665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:28 compute-0 python3.9[65667]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:04:28 compute-0 sudo[65665]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:29 compute-0 sudo[65818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzaamuhctlvocxpzblgygtgdcyynhhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044669.1410222-361-151991810573289/AnsiballZ_command.py'
Feb 02 15:04:29 compute-0 sudo[65818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:29 compute-0 python3.9[65820]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:04:29 compute-0 sudo[65818]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:30 compute-0 sudo[65971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxjdyxzezdjchqwubtfrdpilmrxmuoji ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770044669.7925663-369-147395090427876/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 15:04:30 compute-0 sudo[65971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:30 compute-0 python3[65973]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 15:04:30 compute-0 sudo[65971]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:30 compute-0 sudo[66123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nukbtmvrekdcgjzkysslmkjjivhujjdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044670.676889-377-209462159424868/AnsiballZ_stat.py'
Feb 02 15:04:30 compute-0 sudo[66123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:31 compute-0 python3.9[66125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:31 compute-0 sudo[66123]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:31 compute-0 sudo[66246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwjrzwclvbvegmmfbtnnxzesnnfwtxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044670.676889-377-209462159424868/AnsiballZ_copy.py'
Feb 02 15:04:31 compute-0 sudo[66246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:31 compute-0 python3.9[66248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044670.676889-377-209462159424868/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:31 compute-0 sudo[66246]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:32 compute-0 sudo[66398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xafrujvbfnwtgmviernypnreumrkevkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044671.826074-392-244501249528513/AnsiballZ_stat.py'
Feb 02 15:04:32 compute-0 sudo[66398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:32 compute-0 python3.9[66400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:32 compute-0 sudo[66398]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:32 compute-0 sudo[66521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqnhunfdprihzxhetcyemzpugrpmkrqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044671.826074-392-244501249528513/AnsiballZ_copy.py'
Feb 02 15:04:32 compute-0 sudo[66521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:32 compute-0 python3.9[66523]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044671.826074-392-244501249528513/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:32 compute-0 sudo[66521]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:33 compute-0 sudo[66673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvkgipoimzpbvhwmfbjnzggtdkhtepek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044672.966474-407-204026339753423/AnsiballZ_stat.py'
Feb 02 15:04:33 compute-0 sudo[66673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:33 compute-0 python3.9[66675]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:33 compute-0 sudo[66673]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:33 compute-0 sudo[66796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnlwxqivhtvgecjikphdfnzvykogilsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044672.966474-407-204026339753423/AnsiballZ_copy.py'
Feb 02 15:04:33 compute-0 sudo[66796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:34 compute-0 python3.9[66798]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044672.966474-407-204026339753423/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:34 compute-0 sudo[66796]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:34 compute-0 sudo[66948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxsmimnhpqhfazaugcunhkcibrbckavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044674.2258494-422-252414791541830/AnsiballZ_stat.py'
Feb 02 15:04:34 compute-0 sudo[66948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:34 compute-0 python3.9[66950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:34 compute-0 sudo[66948]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:35 compute-0 sudo[67071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwiomxxzjzhxsbomxvufvsbmolkamwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044674.2258494-422-252414791541830/AnsiballZ_copy.py'
Feb 02 15:04:35 compute-0 sudo[67071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:35 compute-0 python3.9[67073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044674.2258494-422-252414791541830/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:35 compute-0 sudo[67071]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:35 compute-0 sudo[67223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cusdnmdxksmtihnswapbwhyeficxuqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044675.4286654-437-239130875727577/AnsiballZ_stat.py'
Feb 02 15:04:35 compute-0 sudo[67223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:35 compute-0 python3.9[67225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:04:35 compute-0 sudo[67223]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:36 compute-0 sudo[67346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzqogghjfrblqwtwjsxigzumydmscgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044675.4286654-437-239130875727577/AnsiballZ_copy.py'
Feb 02 15:04:36 compute-0 sudo[67346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:36 compute-0 python3.9[67348]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770044675.4286654-437-239130875727577/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:36 compute-0 sudo[67346]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:36 compute-0 sudo[67498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abmbndsinugfshlnftqlqzxekhwgfwlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044676.6603315-452-271429810878447/AnsiballZ_file.py'
Feb 02 15:04:36 compute-0 sudo[67498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:37 compute-0 python3.9[67500]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:37 compute-0 sudo[67498]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:37 compute-0 sudo[67650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dintdjssmmlowxgnrirrljphcehsnmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044677.2289486-460-255126931848656/AnsiballZ_command.py'
Feb 02 15:04:37 compute-0 sudo[67650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:37 compute-0 python3.9[67652]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:04:37 compute-0 sudo[67650]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:38 compute-0 sudo[67809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqbxwmjugzudqeynsuoenmmduzqtowel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044677.8644953-468-63745112909979/AnsiballZ_blockinfile.py'
Feb 02 15:04:38 compute-0 sudo[67809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:38 compute-0 python3.9[67811]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:38 compute-0 sudo[67809]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:39 compute-0 sudo[67962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqjyginmaubwbrgfowekwlnawmfeuqpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044678.802843-477-247344061239189/AnsiballZ_file.py'
Feb 02 15:04:39 compute-0 sudo[67962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:39 compute-0 python3.9[67964]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:39 compute-0 sudo[67962]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:39 compute-0 sudo[68114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtbavqqwaydgcisszngedfonbgwnwcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044679.4129438-477-170775947621984/AnsiballZ_file.py'
Feb 02 15:04:39 compute-0 sudo[68114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:39 compute-0 python3.9[68116]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:39 compute-0 sudo[68114]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:40 compute-0 sudo[68266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxvtctrmueiirwllzpdigvjxwfetsfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044680.0845363-492-79185129787332/AnsiballZ_mount.py'
Feb 02 15:04:40 compute-0 sudo[68266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:40 compute-0 python3.9[68268]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 15:04:40 compute-0 sudo[68266]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:41 compute-0 sudo[68419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqedidnezprcxctnifiuxgvkoymqoofq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044681.098666-492-182569753057686/AnsiballZ_mount.py'
Feb 02 15:04:41 compute-0 sudo[68419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:41 compute-0 python3.9[68421]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 15:04:41 compute-0 sudo[68419]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:41 compute-0 sshd-session[59219]: Connection closed by 192.168.122.30 port 33516
Feb 02 15:04:41 compute-0 sshd-session[59216]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:04:42 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Feb 02 15:04:42 compute-0 systemd[1]: session-14.scope: Consumed 31.710s CPU time.
Feb 02 15:04:42 compute-0 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Feb 02 15:04:42 compute-0 systemd-logind[786]: Removed session 14.
Feb 02 15:04:47 compute-0 sshd-session[68447]: Accepted publickey for zuul from 192.168.122.30 port 48120 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:04:47 compute-0 systemd-logind[786]: New session 15 of user zuul.
Feb 02 15:04:47 compute-0 systemd[1]: Started Session 15 of User zuul.
Feb 02 15:04:47 compute-0 sshd-session[68447]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:04:48 compute-0 sudo[68600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladjglomvbzbhqkvjhtosdzsykgdctma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044687.867153-16-275793440758806/AnsiballZ_tempfile.py'
Feb 02 15:04:48 compute-0 sudo[68600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:48 compute-0 python3.9[68602]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 02 15:04:48 compute-0 sudo[68600]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:49 compute-0 sudo[68752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwgbxzjbhukawhbvwsemkaunzwvvwqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044688.7015793-28-220011787717742/AnsiballZ_stat.py'
Feb 02 15:04:49 compute-0 sudo[68752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:49 compute-0 python3.9[68754]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:04:49 compute-0 sudo[68752]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:50 compute-0 sudo[68904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xooverqgxrtvplcvubtpvwjcnurlmrng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044689.570177-38-88422009651748/AnsiballZ_setup.py'
Feb 02 15:04:50 compute-0 sudo[68904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:50 compute-0 python3.9[68906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:04:50 compute-0 sudo[68904]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:51 compute-0 sudo[69056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyinvqmpmrrudycwuifenygxsxlqhrfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044690.6444876-47-170224606484659/AnsiballZ_blockinfile.py'
Feb 02 15:04:51 compute-0 sudo[69056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:51 compute-0 python3.9[69058]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCujbCE344dzd5aW6GG04mR4h2IsckejACLY7ihWz4yYp81LQjEf3SFracBI8VRXub4oT9gkdzEXyLbvZ0BHsvBiNGkn16VOJ9Q3/GqKhU6E58mswaIOBpHKPHeW98mVcKwx7Sr++vzFwxKZcAs5adxcVfSLgRkkehKwMnp8Q532D24Ve7hfVLLjEPqqXAIxgumXpcgBlozL+69tEoxYMipxmf9Lb6EzgeWun+GLKSpEABakFJzad8F+CirhPEkREeGYUpNAKKU2Fv6H43cm8VGjdZ4cc4cITm7os6tUblAMee6NPQY6C7B8I2leHcey0yiT6zsZSSumGWfMGgl8E7tlrgsWLt9GKsEOrl3xxcAYV1vwl4498I4vD6snf1B0Jtki+BYhGzcN32imYItx4W1Ev/JhHfWHYOZUbStwuEtin2wxLS4MijlR+A4c2HoJZaUvSg2h8pKH/gK3hslcCA2vFH8P2C0gSeZxcPJ1UloBHbUmXScshovc23RMVcVtIc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIImBKhc0pGFrUmCwl/KjuUkeButVm48ak5OqOT7WW0FK
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBFMuj0/gkzyq5OTLSvXRObhIfDk9AaGYJS/YW5/yeiFYKNmn1O5EdHf9Zx7iuWkXi6VSxpStB/Z4Y9fR602dno=
                                             create=True mode=0644 path=/tmp/ansible.vjx52rab state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:51 compute-0 sudo[69056]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:51 compute-0 sudo[69208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usttmbzdjeqigjsevwtlprrmkclwgvlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044691.4057815-55-116500746270385/AnsiballZ_command.py'
Feb 02 15:04:51 compute-0 sudo[69208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:52 compute-0 python3.9[69210]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vjx52rab' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:04:52 compute-0 sudo[69208]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:52 compute-0 sudo[69362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnllztsvdogiecphnvekohehoahwtdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044692.273286-63-241970112849760/AnsiballZ_file.py'
Feb 02 15:04:52 compute-0 sudo[69362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:04:52 compute-0 python3.9[69364]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vjx52rab state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:04:52 compute-0 sudo[69362]: pam_unix(sudo:session): session closed for user root
Feb 02 15:04:53 compute-0 sshd-session[68450]: Connection closed by 192.168.122.30 port 48120
Feb 02 15:04:53 compute-0 sshd-session[68447]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:04:53 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Feb 02 15:04:53 compute-0 systemd[1]: session-15.scope: Consumed 3.257s CPU time.
Feb 02 15:04:53 compute-0 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Feb 02 15:04:53 compute-0 systemd-logind[786]: Removed session 15.
Feb 02 15:04:54 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 15:04:58 compute-0 sshd-session[69391]: Accepted publickey for zuul from 192.168.122.30 port 47158 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:04:58 compute-0 systemd-logind[786]: New session 16 of user zuul.
Feb 02 15:04:58 compute-0 systemd[1]: Started Session 16 of User zuul.
Feb 02 15:04:58 compute-0 sshd-session[69391]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:04:59 compute-0 python3.9[69544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:05:00 compute-0 sudo[69698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyhkwechcpsnopgnjclshcizyyhuujsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044700.3258724-27-21351210627764/AnsiballZ_systemd.py'
Feb 02 15:05:00 compute-0 sudo[69698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:01 compute-0 python3.9[69700]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 15:05:01 compute-0 sudo[69698]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:01 compute-0 sudo[69852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eihanjowqruxdzseaujzdolikowbvgqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044701.3613665-35-77140887651777/AnsiballZ_systemd.py'
Feb 02 15:05:01 compute-0 sudo[69852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:01 compute-0 python3.9[69854]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:05:01 compute-0 sudo[69852]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:02 compute-0 sudo[70005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfaxorpenljqdubgxsszulbenhlygggh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044702.1448655-44-58196843147352/AnsiballZ_command.py'
Feb 02 15:05:02 compute-0 sudo[70005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:02 compute-0 python3.9[70007]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:02 compute-0 sudo[70005]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:03 compute-0 sudo[70158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpwybdlnvqbzezhydwgnyuvthcbjosb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044702.955992-52-77902369883877/AnsiballZ_stat.py'
Feb 02 15:05:03 compute-0 sudo[70158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:03 compute-0 python3.9[70160]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:05:03 compute-0 sudo[70158]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:04 compute-0 sudo[70312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmafruebgjkzpsqhbwtwdrvtjydksmvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044703.8027723-60-241949724972227/AnsiballZ_command.py'
Feb 02 15:05:04 compute-0 sudo[70312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:04 compute-0 python3.9[70314]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:04 compute-0 sudo[70312]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:04 compute-0 sudo[70467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhpyzwwzavyzwjzxazzazjluytvoelmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044704.456769-68-100305756633830/AnsiballZ_file.py'
Feb 02 15:05:04 compute-0 sudo[70467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:05 compute-0 python3.9[70469]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:05 compute-0 sudo[70467]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:05 compute-0 sshd-session[69394]: Connection closed by 192.168.122.30 port 47158
Feb 02 15:05:05 compute-0 sshd-session[69391]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:05:05 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Feb 02 15:05:05 compute-0 systemd[1]: session-16.scope: Consumed 4.127s CPU time.
Feb 02 15:05:05 compute-0 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Feb 02 15:05:05 compute-0 systemd-logind[786]: Removed session 16.
Feb 02 15:05:10 compute-0 sshd-session[70494]: Accepted publickey for zuul from 192.168.122.30 port 34074 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:05:10 compute-0 systemd-logind[786]: New session 17 of user zuul.
Feb 02 15:05:10 compute-0 systemd[1]: Started Session 17 of User zuul.
Feb 02 15:05:10 compute-0 sshd-session[70494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:05:11 compute-0 python3.9[70647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:05:12 compute-0 sudo[70801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmstgyouuhalyizzlgujolubazashxoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044712.19459-29-41840055967170/AnsiballZ_setup.py'
Feb 02 15:05:12 compute-0 sudo[70801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:12 compute-0 python3.9[70803]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:05:13 compute-0 sudo[70801]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:13 compute-0 sudo[70885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bljqcwcibirrfhwmofbmkqhbeiodfjau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044712.19459-29-41840055967170/AnsiballZ_dnf.py'
Feb 02 15:05:13 compute-0 sudo[70885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:13 compute-0 python3.9[70887]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 15:05:14 compute-0 sudo[70885]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:15 compute-0 python3.9[71038]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:16 compute-0 python3.9[71189]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:05:17 compute-0 python3.9[71339]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:05:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:05:18 compute-0 python3.9[71490]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:05:18 compute-0 sshd-session[70497]: Connection closed by 192.168.122.30 port 34074
Feb 02 15:05:18 compute-0 sshd-session[70494]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:05:18 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Feb 02 15:05:18 compute-0 systemd[1]: session-17.scope: Consumed 5.320s CPU time.
Feb 02 15:05:18 compute-0 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Feb 02 15:05:18 compute-0 systemd-logind[786]: Removed session 17.
Feb 02 15:05:26 compute-0 sshd-session[71515]: Accepted publickey for zuul from 38.129.56.75 port 41702 ssh2: RSA SHA256:6MqBH2X7LXmocyY6TeaOivEV/FItCxqrc1tGLmCm8YI
Feb 02 15:05:26 compute-0 systemd-logind[786]: New session 18 of user zuul.
Feb 02 15:05:26 compute-0 systemd[1]: Started Session 18 of User zuul.
Feb 02 15:05:26 compute-0 sshd-session[71515]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:05:26 compute-0 sudo[71591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikjsdguppmsmbjilqpqmltuahnnjagm ; /usr/bin/python3'
Feb 02 15:05:26 compute-0 sudo[71591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:26 compute-0 useradd[71595]: new group: name=ceph-admin, GID=42478
Feb 02 15:05:26 compute-0 useradd[71595]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Feb 02 15:05:26 compute-0 sudo[71591]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:27 compute-0 sudo[71677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjfxolnnbfcjffmurttfuhiadznqlzd ; /usr/bin/python3'
Feb 02 15:05:27 compute-0 sudo[71677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:27 compute-0 sudo[71677]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:27 compute-0 sudo[71750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdeplubyeuvublmtwvlrnprwzdzschv ; /usr/bin/python3'
Feb 02 15:05:27 compute-0 sudo[71750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:27 compute-0 sudo[71750]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:28 compute-0 sudo[71800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnsraxhavymugxukmstyjsvcunfqkgy ; /usr/bin/python3'
Feb 02 15:05:28 compute-0 sudo[71800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:28 compute-0 sudo[71800]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:28 compute-0 sudo[71826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idkmmemwevztgmcqclxjfzzcamdcccmx ; /usr/bin/python3'
Feb 02 15:05:28 compute-0 sudo[71826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:28 compute-0 sudo[71826]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:28 compute-0 sudo[71852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icrvqhkqykfnscahbfjdoklversdwnvi ; /usr/bin/python3'
Feb 02 15:05:28 compute-0 sudo[71852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:28 compute-0 sudo[71852]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:29 compute-0 sudo[71878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggjojvxinkjovfsdoeqqwfoovnecbkxd ; /usr/bin/python3'
Feb 02 15:05:29 compute-0 sudo[71878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:29 compute-0 sudo[71878]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:29 compute-0 sudo[71956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjsdcdewmfsrercorwigslsidfvrvng ; /usr/bin/python3'
Feb 02 15:05:29 compute-0 sudo[71956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:29 compute-0 sudo[71956]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:30 compute-0 sudo[72029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjtylxaqxojssocpbasgkatvklwmldom ; /usr/bin/python3'
Feb 02 15:05:30 compute-0 sudo[72029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:30 compute-0 sudo[72029]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:30 compute-0 sudo[72131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whnegvohoghphewqwgkxtndmuxofcbho ; /usr/bin/python3'
Feb 02 15:05:30 compute-0 sudo[72131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:30 compute-0 sudo[72131]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:30 compute-0 sudo[72204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpbofseqrdsegoxdruwlydrzyqkqxvl ; /usr/bin/python3'
Feb 02 15:05:30 compute-0 sudo[72204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:31 compute-0 sudo[72204]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:31 compute-0 sudo[72254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xklenaavpqqnbdguwreibazkwinjiywp ; /usr/bin/python3'
Feb 02 15:05:31 compute-0 sudo[72254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:31 compute-0 python3[72256]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:05:32 compute-0 sudo[72254]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:33 compute-0 sudo[72349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuecaxsxupcmdsurqdshlrgvtgdasdgw ; /usr/bin/python3'
Feb 02 15:05:33 compute-0 sudo[72349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:33 compute-0 python3[72351]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 15:05:34 compute-0 sudo[72349]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:34 compute-0 sudo[72376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewcrfploaikbzcwfasntbokyhlbojjw ; /usr/bin/python3'
Feb 02 15:05:34 compute-0 sudo[72376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:34 compute-0 python3[72378]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:05:34 compute-0 sudo[72376]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:35 compute-0 sudo[72402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhzxsubegjwsglpzvhmrygsbzopxmxxw ; /usr/bin/python3'
Feb 02 15:05:35 compute-0 sudo[72402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:35 compute-0 python3[72404]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:35 compute-0 kernel: loop: module loaded
Feb 02 15:05:35 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Feb 02 15:05:35 compute-0 sudo[72402]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:35 compute-0 sudo[72437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhkcklnytfycgjxjqumzeqcgwgmbojzx ; /usr/bin/python3'
Feb 02 15:05:35 compute-0 sudo[72437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:35 compute-0 python3[72439]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:35 compute-0 lvm[72442]: PV /dev/loop3 not used.
Feb 02 15:05:35 compute-0 lvm[72451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:05:35 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 02 15:05:35 compute-0 sudo[72437]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:35 compute-0 lvm[72453]:   1 logical volume(s) in volume group "ceph_vg0" now active
Feb 02 15:05:35 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 02 15:05:36 compute-0 sudo[72529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjjatsoviuznhxickbveneszbnqbmzl ; /usr/bin/python3'
Feb 02 15:05:36 compute-0 sudo[72529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:36 compute-0 python3[72531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:05:36 compute-0 sudo[72529]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:36 compute-0 sudo[72602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npzzcdujbzzbeioylpjlekfzdhtbhmwb ; /usr/bin/python3'
Feb 02 15:05:36 compute-0 sudo[72602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:36 compute-0 python3[72604]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044735.9767575-36285-137418439276842/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:36 compute-0 sudo[72602]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:37 compute-0 sudo[72652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilrbkevhscujwsariqmzhqayarvhlavv ; /usr/bin/python3'
Feb 02 15:05:37 compute-0 sudo[72652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:37 compute-0 python3[72654]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:05:37 compute-0 systemd[1]: Reloading.
Feb 02 15:05:37 compute-0 systemd-sysv-generator[72687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:05:37 compute-0 systemd-rc-local-generator[72684]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:05:37 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 02 15:05:37 compute-0 bash[72694]: /dev/loop3: [64513]:4329561 (/var/lib/ceph-osd-0.img)
Feb 02 15:05:37 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 02 15:05:37 compute-0 lvm[72695]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:05:37 compute-0 lvm[72695]: VG ceph_vg0 finished
Feb 02 15:05:37 compute-0 sudo[72652]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:37 compute-0 sudo[72719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujkaawrottzsthzubadvmmdznwvylgc ; /usr/bin/python3'
Feb 02 15:05:37 compute-0 sudo[72719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:38 compute-0 python3[72721]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 15:05:39 compute-0 sudo[72719]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:39 compute-0 sudo[72746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfnamknttgmvijhuknwkxyrubmbnlbrw ; /usr/bin/python3'
Feb 02 15:05:39 compute-0 sudo[72746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:39 compute-0 python3[72748]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:05:39 compute-0 sudo[72746]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:39 compute-0 sudo[72772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqrzlnryecagrgtmwkvjaqpjebfeulry ; /usr/bin/python3'
Feb 02 15:05:39 compute-0 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:39 compute-0 python3[72774]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:39 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Feb 02 15:05:39 compute-0 sudo[72772]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:40 compute-0 sudo[72804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhlmxsvmcdfeksgbyvtfpxluhbtvfsd ; /usr/bin/python3'
Feb 02 15:05:40 compute-0 sudo[72804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:40 compute-0 python3[72806]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:40 compute-0 lvm[72809]: PV /dev/loop4 not used.
Feb 02 15:05:40 compute-0 lvm[72819]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:05:40 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb 02 15:05:40 compute-0 sudo[72804]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:40 compute-0 lvm[72821]:   1 logical volume(s) in volume group "ceph_vg1" now active
Feb 02 15:05:40 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb 02 15:05:40 compute-0 sudo[72897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjlqcvsuoqqfduhqropfdxpdlyojdfca ; /usr/bin/python3'
Feb 02 15:05:40 compute-0 sudo[72897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:40 compute-0 python3[72899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:05:40 compute-0 sudo[72897]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:41 compute-0 sudo[72970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyggnalfzhzaasrksdgtwhftgehpbkxk ; /usr/bin/python3'
Feb 02 15:05:41 compute-0 sudo[72970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:41 compute-0 python3[72972]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044740.674793-36312-180245452018741/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:41 compute-0 sudo[72970]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:41 compute-0 sudo[73020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alemwtqkxlambowffrqcyojstadrpsjj ; /usr/bin/python3'
Feb 02 15:05:41 compute-0 sudo[73020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:41 compute-0 python3[73022]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:05:41 compute-0 systemd[1]: Reloading.
Feb 02 15:05:41 compute-0 systemd-sysv-generator[73055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:05:41 compute-0 systemd-rc-local-generator[73049]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:05:42 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 02 15:05:42 compute-0 bash[73063]: /dev/loop4: [64513]:4642326 (/var/lib/ceph-osd-1.img)
Feb 02 15:05:42 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 02 15:05:42 compute-0 sudo[73020]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:42 compute-0 lvm[73064]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:05:42 compute-0 lvm[73064]: VG ceph_vg1 finished
Feb 02 15:05:42 compute-0 sudo[73088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpudllqfcmbobblbcwxqpglrnljbvtb ; /usr/bin/python3'
Feb 02 15:05:42 compute-0 sudo[73088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:42 compute-0 python3[73090]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 15:05:43 compute-0 sudo[73088]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:43 compute-0 sudo[73115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdtoglequscfspxxohibwpqxktwqglvm ; /usr/bin/python3'
Feb 02 15:05:43 compute-0 sudo[73115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:43 compute-0 python3[73117]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:05:43 compute-0 sudo[73115]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:44 compute-0 sudo[73141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmnyputcppjvuqanhqqxlqobwishttro ; /usr/bin/python3'
Feb 02 15:05:44 compute-0 sudo[73141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:44 compute-0 python3[73143]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:44 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Feb 02 15:05:44 compute-0 sudo[73141]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:44 compute-0 sudo[73173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeonycitvnksgxjkszbejjrrhlttfsxe ; /usr/bin/python3'
Feb 02 15:05:44 compute-0 sudo[73173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:44 compute-0 python3[73175]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:44 compute-0 lvm[73178]: PV /dev/loop5 not used.
Feb 02 15:05:44 compute-0 lvm[73188]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:05:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb 02 15:05:44 compute-0 lvm[73190]:   1 logical volume(s) in volume group "ceph_vg2" now active
Feb 02 15:05:44 compute-0 sudo[73173]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:44 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb 02 15:05:45 compute-0 sudo[73266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzpvmrkcxjyiqgupgjwbwfdgkhrucrd ; /usr/bin/python3'
Feb 02 15:05:45 compute-0 sudo[73266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:45 compute-0 python3[73268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:05:45 compute-0 sudo[73266]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:45 compute-0 sudo[73339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfsszjeqlmqraysudbnnfohosnshwrz ; /usr/bin/python3'
Feb 02 15:05:45 compute-0 sudo[73339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:45 compute-0 python3[73341]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044744.9085982-36339-23057263201444/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:45 compute-0 sudo[73339]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:46 compute-0 sudo[73389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qawycuyjfmtjljsrnwxjotcscfntdosm ; /usr/bin/python3'
Feb 02 15:05:46 compute-0 sudo[73389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:46 compute-0 python3[73391]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:05:46 compute-0 systemd[1]: Reloading.
Feb 02 15:05:46 compute-0 systemd-rc-local-generator[73410]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:05:46 compute-0 systemd-sysv-generator[73414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:05:47 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 02 15:05:47 compute-0 bash[73431]: /dev/loop5: [64513]:4642849 (/var/lib/ceph-osd-2.img)
Feb 02 15:05:47 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 02 15:05:47 compute-0 lvm[73432]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:05:47 compute-0 lvm[73432]: VG ceph_vg2 finished
Feb 02 15:05:47 compute-0 sudo[73389]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:48 compute-0 python3[73456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:05:50 compute-0 sudo[73547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orayxrarhlekqvwuiyuwagkksagnsotk ; /usr/bin/python3'
Feb 02 15:05:50 compute-0 sudo[73547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:50 compute-0 python3[73549]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 15:05:51 compute-0 chronyd[58735]: Selected source 138.197.135.239 (pool.ntp.org)
Feb 02 15:05:53 compute-0 sudo[73547]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:53 compute-0 sudo[73604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjnmpstjibjhukrgvyarbzarnkomcrk ; /usr/bin/python3'
Feb 02 15:05:53 compute-0 sudo[73604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:53 compute-0 python3[73606]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 02 15:05:56 compute-0 groupadd[73616]: group added to /etc/group: name=cephadm, GID=993
Feb 02 15:05:56 compute-0 groupadd[73616]: group added to /etc/gshadow: name=cephadm
Feb 02 15:05:56 compute-0 groupadd[73616]: new group: name=cephadm, GID=993
Feb 02 15:05:56 compute-0 useradd[73623]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Feb 02 15:05:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:05:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:05:57 compute-0 sudo[73604]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:05:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:05:57 compute-0 systemd[1]: run-re6ecc28a602f4d378ef3259425085653.service: Deactivated successfully.
Feb 02 15:05:57 compute-0 sudo[73723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxhrhumrputfhgdptixwdsqdgeqdaqca ; /usr/bin/python3'
Feb 02 15:05:57 compute-0 sudo[73723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:57 compute-0 python3[73725]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:05:57 compute-0 sudo[73723]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:57 compute-0 sudo[73751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxpkbxlzuabkytsbelxilfgkxpsfvwcd ; /usr/bin/python3'
Feb 02 15:05:57 compute-0 sudo[73751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:57 compute-0 python3[73753]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:05:58 compute-0 sudo[73751]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:58 compute-0 sudo[73790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkxknlrwbokshkbwjordhddrqifjmzn ; /usr/bin/python3'
Feb 02 15:05:58 compute-0 sudo[73790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:58 compute-0 python3[73792]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:58 compute-0 sudo[73790]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:58 compute-0 sudo[73816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aslcmkoexyfkjfqzrsymyprivcntxhnu ; /usr/bin/python3'
Feb 02 15:05:58 compute-0 sudo[73816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:58 compute-0 python3[73818]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:58 compute-0 sudo[73816]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:59 compute-0 sudo[73894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dszmokkmexckawemfftystxtaressmwj ; /usr/bin/python3'
Feb 02 15:05:59 compute-0 sudo[73894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:59 compute-0 python3[73896]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:05:59 compute-0 sudo[73894]: pam_unix(sudo:session): session closed for user root
Feb 02 15:05:59 compute-0 sudo[73967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycdflwsboyphvmqjyjcadzwsbhhayqf ; /usr/bin/python3'
Feb 02 15:05:59 compute-0 sudo[73967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:05:59 compute-0 python3[73969]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044759.337774-36487-76135876253390/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:05:59 compute-0 sudo[73967]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:00 compute-0 sudo[74069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffffulizmyzywblnmkktkriqksgwoopq ; /usr/bin/python3'
Feb 02 15:06:00 compute-0 sudo[74069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:00 compute-0 python3[74071]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:06:00 compute-0 sudo[74069]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:00 compute-0 sudo[74142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnonkyvcmwwprfbvgioeswkhtscujyp ; /usr/bin/python3'
Feb 02 15:06:00 compute-0 sudo[74142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:01 compute-0 python3[74144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044760.385594-36505-179069905767796/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:06:01 compute-0 sudo[74142]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:01 compute-0 sudo[74192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjegbasxrhgepipnmkqdxylmyplmswha ; /usr/bin/python3'
Feb 02 15:06:01 compute-0 sudo[74192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:01 compute-0 python3[74194]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:06:01 compute-0 sudo[74192]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:01 compute-0 sudo[74220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acouufjsuiwzxkoxcfjiesdqvwdhvodn ; /usr/bin/python3'
Feb 02 15:06:01 compute-0 sudo[74220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:01 compute-0 python3[74222]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:06:01 compute-0 sudo[74220]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:01 compute-0 sudo[74248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulanzlbujvmhjqjvnrwjopkgmnirkdmn ; /usr/bin/python3'
Feb 02 15:06:01 compute-0 sudo[74248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:02 compute-0 python3[74250]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:06:02 compute-0 sudo[74248]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:02 compute-0 python3[74276]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:06:02 compute-0 sudo[74300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnziottqhgbpdtohzlvzprwjrsccwiwq ; /usr/bin/python3'
Feb 02 15:06:02 compute-0 sudo[74300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:02 compute-0 python3[74302]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:06:03 compute-0 sshd-session[74306]: Accepted publickey for ceph-admin from 192.168.122.100 port 51774 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:03 compute-0 systemd-logind[786]: New session 19 of user ceph-admin.
Feb 02 15:06:03 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 02 15:06:03 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 02 15:06:03 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 02 15:06:03 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 02 15:06:03 compute-0 systemd[74310]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:03 compute-0 systemd[74310]: Queued start job for default target Main User Target.
Feb 02 15:06:03 compute-0 systemd[74310]: Created slice User Application Slice.
Feb 02 15:06:03 compute-0 systemd[74310]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 15:06:03 compute-0 systemd[74310]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 15:06:03 compute-0 systemd[74310]: Reached target Paths.
Feb 02 15:06:03 compute-0 systemd[74310]: Reached target Timers.
Feb 02 15:06:03 compute-0 systemd[74310]: Starting D-Bus User Message Bus Socket...
Feb 02 15:06:03 compute-0 systemd[74310]: Starting Create User's Volatile Files and Directories...
Feb 02 15:06:03 compute-0 systemd[74310]: Listening on D-Bus User Message Bus Socket.
Feb 02 15:06:03 compute-0 systemd[74310]: Reached target Sockets.
Feb 02 15:06:03 compute-0 systemd[74310]: Finished Create User's Volatile Files and Directories.
Feb 02 15:06:03 compute-0 systemd[74310]: Reached target Basic System.
Feb 02 15:06:03 compute-0 systemd[74310]: Reached target Main User Target.
Feb 02 15:06:03 compute-0 systemd[74310]: Startup finished in 111ms.
Feb 02 15:06:03 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 02 15:06:03 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Feb 02 15:06:03 compute-0 sshd-session[74306]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:03 compute-0 sudo[74326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Feb 02 15:06:03 compute-0 sudo[74326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:03 compute-0 sudo[74326]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:03 compute-0 sshd-session[74325]: Received disconnect from 192.168.122.100 port 51774:11: disconnected by user
Feb 02 15:06:03 compute-0 sshd-session[74325]: Disconnected from user ceph-admin 192.168.122.100 port 51774
Feb 02 15:06:03 compute-0 sshd-session[74306]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 02 15:06:03 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Feb 02 15:06:03 compute-0 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Feb 02 15:06:03 compute-0 systemd-logind[786]: Removed session 19.
Feb 02 15:06:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2801116093-merged.mount: Deactivated successfully.
Feb 02 15:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2801116093-lower\x2dmapped.mount: Deactivated successfully.
Feb 02 15:06:13 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Feb 02 15:06:13 compute-0 systemd[74310]: Activating special unit Exit the Session...
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped target Main User Target.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped target Basic System.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped target Paths.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped target Sockets.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped target Timers.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 02 15:06:13 compute-0 systemd[74310]: Closed D-Bus User Message Bus Socket.
Feb 02 15:06:13 compute-0 systemd[74310]: Stopped Create User's Volatile Files and Directories.
Feb 02 15:06:13 compute-0 systemd[74310]: Removed slice User Application Slice.
Feb 02 15:06:13 compute-0 systemd[74310]: Reached target Shutdown.
Feb 02 15:06:13 compute-0 systemd[74310]: Finished Exit the Session.
Feb 02 15:06:13 compute-0 systemd[74310]: Reached target Exit the Session.
Feb 02 15:06:13 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Feb 02 15:06:13 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Feb 02 15:06:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 02 15:06:13 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 02 15:06:13 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 02 15:06:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 02 15:06:13 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Feb 02 15:06:20 compute-0 podman[74402]: 2026-02-02 15:06:20.512863179 +0000 UTC m=+16.899923460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.577422214 +0000 UTC m=+0.043638932 container create e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1075182537-merged.mount: Deactivated successfully.
Feb 02 15:06:20 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 02 15:06:20 compute-0 systemd[1]: Started libpod-conmon-e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715.scope.
Feb 02 15:06:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.555930257 +0000 UTC m=+0.022146995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.663214771 +0000 UTC m=+0.129431499 container init e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.668583983 +0000 UTC m=+0.134800701 container start e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.67131482 +0000 UTC m=+0.137531548 container attach e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:20 compute-0 cool_shamir[74479]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 02 15:06:20 compute-0 systemd[1]: libpod-e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715.scope: Deactivated successfully.
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.763174926 +0000 UTC m=+0.229391684 container died e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:20 compute-0 podman[74463]: 2026-02-02 15:06:20.798052243 +0000 UTC m=+0.264268961 container remove e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715 (image=quay.io/ceph/ceph:v20, name=cool_shamir, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:20 compute-0 systemd[1]: libpod-conmon-e728eb36858985d3a2a145176e55b9c91035d0c0aa9af8d08c71e95b8d316715.scope: Deactivated successfully.
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.870443759 +0000 UTC m=+0.051756441 container create fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:06:20 compute-0 systemd[1]: Started libpod-conmon-fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08.scope.
Feb 02 15:06:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.927980613 +0000 UTC m=+0.109293325 container init fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.936054081 +0000 UTC m=+0.117366763 container start fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:06:20 compute-0 jolly_fermi[74512]: 167 167
Feb 02 15:06:20 compute-0 systemd[1]: libpod-fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08.scope: Deactivated successfully.
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.940069769 +0000 UTC m=+0.121382471 container attach fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:20 compute-0 conmon[74512]: conmon fe2775d85af06c51a1b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08.scope/container/memory.events
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.941547796 +0000 UTC m=+0.122860518 container died fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.85049564 +0000 UTC m=+0.031808372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:20 compute-0 podman[74496]: 2026-02-02 15:06:20.981284162 +0000 UTC m=+0.162596844 container remove fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08 (image=quay.io/ceph/ceph:v20, name=jolly_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:20 compute-0 systemd[1]: libpod-conmon-fe2775d85af06c51a1b093cbbf5d60b55cbbedcee38255487387e81c9e8b9c08.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.051626329 +0000 UTC m=+0.045969720 container create 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:21 compute-0 systemd[1]: Started libpod-conmon-3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece.scope.
Feb 02 15:06:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.106151028 +0000 UTC m=+0.100494409 container init 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.11275386 +0000 UTC m=+0.107097211 container start 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.115664082 +0000 UTC m=+0.110007453 container attach 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.029348882 +0000 UTC m=+0.023692283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:21 compute-0 objective_shtern[74545]: AQBtvYBpZe6cBxAAdqYgR0cj7JYn+yuVT0GYMA==
Feb 02 15:06:21 compute-0 systemd[1]: libpod-3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.129523232 +0000 UTC m=+0.123866583 container died 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:06:21 compute-0 podman[74528]: 2026-02-02 15:06:21.173746888 +0000 UTC m=+0.168090279 container remove 3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece (image=quay.io/ceph/ceph:v20, name=objective_shtern, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:06:21 compute-0 systemd[1]: libpod-conmon-3d7958a6693ebd35d4c0de727b49bdad6f05e304361f74097dfe36826c3c7ece.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.223159161 +0000 UTC m=+0.035997165 container create 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:21 compute-0 systemd[1]: Started libpod-conmon-2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f.scope.
Feb 02 15:06:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.270148545 +0000 UTC m=+0.082986539 container init 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.273776724 +0000 UTC m=+0.086614698 container start 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.27688373 +0000 UTC m=+0.089721734 container attach 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:21 compute-0 musing_brown[74579]: AQBtvYBp+3YqERAARTN1rDANh5r9nNCEwiwqwg==
Feb 02 15:06:21 compute-0 systemd[1]: libpod-2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.290873604 +0000 UTC m=+0.103711618 container died 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.205695482 +0000 UTC m=+0.018533496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:21 compute-0 podman[74563]: 2026-02-02 15:06:21.323510015 +0000 UTC m=+0.136347989 container remove 2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f (image=quay.io/ceph/ceph:v20, name=musing_brown, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:06:21 compute-0 systemd[1]: libpod-conmon-2fa46f1f245dde868a7174930be0efd49c63344c0192c3eb0f2a95b7d286310f.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.385572769 +0000 UTC m=+0.042169307 container create cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:06:21 compute-0 systemd[1]: Started libpod-conmon-cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6.scope.
Feb 02 15:06:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.453034256 +0000 UTC m=+0.109630804 container init cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.457890195 +0000 UTC m=+0.114486733 container start cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.460846057 +0000 UTC m=+0.117442615 container attach cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.37053752 +0000 UTC m=+0.027134068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:21 compute-0 nostalgic_bhabha[74616]: AQBtvYBpK3lJHBAAhqJML0BemHOjYiQgOKaAjg==
Feb 02 15:06:21 compute-0 systemd[1]: libpod-cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.47763812 +0000 UTC m=+0.134234658 container died cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:21 compute-0 podman[74599]: 2026-02-02 15:06:21.504455238 +0000 UTC m=+0.161051776 container remove cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6 (image=quay.io/ceph/ceph:v20, name=nostalgic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:21 compute-0 systemd[1]: libpod-conmon-cf767158a141bc41c558fbe93fefed6beeb1b4ed12b8e52d3c4eca4783a8a4e6.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.552083528 +0000 UTC m=+0.033661708 container create 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-064ea97bc9a96745a2f9154ae476c14e5fd0e12e77de907a796b70a0be971a88-merged.mount: Deactivated successfully.
Feb 02 15:06:21 compute-0 systemd[1]: Started libpod-conmon-4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd.scope.
Feb 02 15:06:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c010b6c28991b6a36aafa796bffca7732b8faaed13f0ee2c3f0625f23b918/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.603876509 +0000 UTC m=+0.085454709 container init 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.607759795 +0000 UTC m=+0.089337975 container start 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.610774699 +0000 UTC m=+0.092352879 container attach 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.535807579 +0000 UTC m=+0.017385769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:21 compute-0 beautiful_cori[74651]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb 02 15:06:21 compute-0 beautiful_cori[74651]: setting min_mon_release = tentacle
Feb 02 15:06:21 compute-0 beautiful_cori[74651]: /usr/bin/monmaptool: set fsid to e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:21 compute-0 beautiful_cori[74651]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb 02 15:06:21 compute-0 systemd[1]: libpod-4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.641505374 +0000 UTC m=+0.123083554 container died 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a14c010b6c28991b6a36aafa796bffca7732b8faaed13f0ee2c3f0625f23b918-merged.mount: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74635]: 2026-02-02 15:06:21.670019384 +0000 UTC m=+0.151597564 container remove 4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd (image=quay.io/ceph/ceph:v20, name=beautiful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:21 compute-0 systemd[1]: libpod-conmon-4e83d498d1205542a7bead07c751550b589aab89d1f3ca8132ea9b5e5064d4fd.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.720250287 +0000 UTC m=+0.035928933 container create 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:21 compute-0 systemd[1]: Started libpod-conmon-7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316.scope.
Feb 02 15:06:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742aa6cf450f009334361ed9a2e027c3331b6f7726a46a5d141395020b10c43/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742aa6cf450f009334361ed9a2e027c3331b6f7726a46a5d141395020b10c43/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742aa6cf450f009334361ed9a2e027c3331b6f7726a46a5d141395020b10c43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742aa6cf450f009334361ed9a2e027c3331b6f7726a46a5d141395020b10c43/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.780266601 +0000 UTC m=+0.095945257 container init 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.784515955 +0000 UTC m=+0.100194631 container start 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.788115434 +0000 UTC m=+0.103794110 container attach 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.703509786 +0000 UTC m=+0.019188462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:21 compute-0 systemd[1]: libpod-7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316.scope: Deactivated successfully.
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.907129086 +0000 UTC m=+0.222807732 container died 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:06:21 compute-0 podman[74669]: 2026-02-02 15:06:21.93740969 +0000 UTC m=+0.253088356 container remove 7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316 (image=quay.io/ceph/ceph:v20, name=amazing_easley, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:21 compute-0 systemd[1]: libpod-conmon-7c575136ee2f514c6d8edd343f49030c757093667b569df0503d450afc7dc316.scope: Deactivated successfully.
Feb 02 15:06:22 compute-0 systemd[1]: Reloading.
Feb 02 15:06:22 compute-0 systemd-rc-local-generator[74753]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:22 compute-0 systemd-sysv-generator[74758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:22 compute-0 systemd[1]: Reloading.
Feb 02 15:06:22 compute-0 systemd-sysv-generator[74789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:22 compute-0 systemd-rc-local-generator[74786]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:22 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Feb 02 15:06:22 compute-0 systemd[1]: Reloading.
Feb 02 15:06:22 compute-0 systemd-sysv-generator[74831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:22 compute-0 systemd-rc-local-generator[74826]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:22 compute-0 systemd[1]: Reached target Ceph cluster e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:22 compute-0 systemd[1]: Reloading.
Feb 02 15:06:22 compute-0 systemd-rc-local-generator[74863]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:22 compute-0 systemd-sysv-generator[74867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:22 compute-0 systemd[1]: Reloading.
Feb 02 15:06:23 compute-0 systemd-rc-local-generator[74903]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:23 compute-0 systemd-sysv-generator[74908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:23 compute-0 systemd[1]: Created slice Slice /system/ceph-e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:23 compute-0 systemd[1]: Reached target System Time Set.
Feb 02 15:06:23 compute-0 systemd[1]: Reached target System Time Synchronized.
Feb 02 15:06:23 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:23 compute-0 podman[74964]: 2026-02-02 15:06:23.453772265 +0000 UTC m=+0.042984356 container create be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd308d74adad8cb8108317b744dccf8d9625b553b74f7579cb61c6c6e109405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd308d74adad8cb8108317b744dccf8d9625b553b74f7579cb61c6c6e109405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd308d74adad8cb8108317b744dccf8d9625b553b74f7579cb61c6c6e109405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd308d74adad8cb8108317b744dccf8d9625b553b74f7579cb61c6c6e109405/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 podman[74964]: 2026-02-02 15:06:23.512918317 +0000 UTC m=+0.102130388 container init be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:06:23 compute-0 podman[74964]: 2026-02-02 15:06:23.521463017 +0000 UTC m=+0.110675058 container start be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:23 compute-0 bash[74964]: be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4
Feb 02 15:06:23 compute-0 podman[74964]: 2026-02-02 15:06:23.43486173 +0000 UTC m=+0.024073801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:23 compute-0 systemd[1]: Started Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:23 compute-0 ceph-mon[74985]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: pidfile_write: ignore empty --pid-file
Feb 02 15:06:23 compute-0 ceph-mon[74985]: load: jerasure load: lrc 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Git sha 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: DB SUMMARY
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: DB Session ID:  1V1M2YYB8QQATVYR9K2H
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                                     Options.env: 0x55d8bf556440
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                                Options.info_log: 0x55d8c18b53e0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                                 Options.wal_dir: 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                    Options.write_buffer_manager: 0x55d8c1834140
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                               Options.row_cache: None
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                              Options.wal_filter: None
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.wal_compression: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.max_background_jobs: 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.max_total_wal_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:       Options.compaction_readahead_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Compression algorithms supported:
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kZSTD supported: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:           Options.merge_operator: 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:        Options.compaction_filter: None
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d8c1840700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d8c18258d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:        Options.write_buffer_size: 33554432
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:  Options.max_write_buffer_number: 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.compression: NoCompression
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.num_levels: 7
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b7096c04-39ee-4763-9c12-88827d921c4c
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044783563897, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044783565841, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "1V1M2YYB8QQATVYR9K2H", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044783565933, "job": 1, "event": "recovery_finished"}
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d8c1852e00
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: DB pointer 0x55d8c199e000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:06:23 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d8c18258d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:06:23 compute-0 ceph-mon[74985]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@-1(???) e0 preinit fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(probing) e0 win_standalone_election
Feb 02 15:06:23 compute-0 ceph-mon[74985]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.591172679 +0000 UTC m=+0.038369354 container create 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 02 15:06:23 compute-0 ceph-mon[74985]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : created 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-02T15:06:21.815838Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 new map
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-02-02T15:06:23:601132+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mkfs e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:23 compute-0 systemd[1]: Started libpod-conmon-2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d.scope.
Feb 02 15:06:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757d2332d4384e34945a2af8b3462bea5d848ca365d3cfe7780bded4824c983e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757d2332d4384e34945a2af8b3462bea5d848ca365d3cfe7780bded4824c983e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757d2332d4384e34945a2af8b3462bea5d848ca365d3cfe7780bded4824c983e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.576271433 +0000 UTC m=+0.023468108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.682898111 +0000 UTC m=+0.130094806 container init 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.6893878 +0000 UTC m=+0.136584475 container start 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.693186774 +0000 UTC m=+0.140383469 container attach 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 15:06:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422340338' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:   cluster:
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     id:     e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     health: HEALTH_OK
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:  
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:   services:
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     mon: 1 daemons, quorum compute-0 (age 0.261994s) [leader: compute-0]
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     mgr: no daemons active
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     osd: 0 osds: 0 up, 0 in
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:  
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:   data:
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     pools:   0 pools, 0 pgs
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     objects: 0 objects, 0 B
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     usage:   0 B used, 0 B / 0 B avail
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:     pgs:     
Feb 02 15:06:23 compute-0 infallible_rubin[75037]:  
Feb 02 15:06:23 compute-0 systemd[1]: libpod-2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d.scope: Deactivated successfully.
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.882863191 +0000 UTC m=+0.330059876 container died 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-757d2332d4384e34945a2af8b3462bea5d848ca365d3cfe7780bded4824c983e-merged.mount: Deactivated successfully.
Feb 02 15:06:23 compute-0 podman[74986]: 2026-02-02 15:06:23.918136687 +0000 UTC m=+0.365333392 container remove 2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d (image=quay.io/ceph/ceph:v20, name=infallible_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:06:23 compute-0 systemd[1]: libpod-conmon-2981c4a113943eaefb5052b6f378fe9c1ffa34d8710ecaf3309edcb8aa127c2d.scope: Deactivated successfully.
Feb 02 15:06:23 compute-0 podman[75074]: 2026-02-02 15:06:23.979417742 +0000 UTC m=+0.043060159 container create 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:24 compute-0 systemd[1]: Started libpod-conmon-0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c.scope.
Feb 02 15:06:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d876c867ba9802955ee3f9976b9dc79e61189dbcddb289e71d66339f000adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d876c867ba9802955ee3f9976b9dc79e61189dbcddb289e71d66339f000adf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d876c867ba9802955ee3f9976b9dc79e61189dbcddb289e71d66339f000adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d876c867ba9802955ee3f9976b9dc79e61189dbcddb289e71d66339f000adf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:23.960375354 +0000 UTC m=+0.024017811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:24.056423023 +0000 UTC m=+0.120065550 container init 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:24.062888312 +0000 UTC m=+0.126530769 container start 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:24.067625038 +0000 UTC m=+0.131267575 container attach 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3852679715' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:06:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3852679715' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 15:06:24 compute-0 elastic_mendeleev[75091]: 
Feb 02 15:06:24 compute-0 elastic_mendeleev[75091]: [global]
Feb 02 15:06:24 compute-0 elastic_mendeleev[75091]:         fsid = e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:24 compute-0 elastic_mendeleev[75091]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 02 15:06:24 compute-0 elastic_mendeleev[75091]:         osd_crush_chooseleaf_type = 0
Feb 02 15:06:24 compute-0 systemd[1]: libpod-0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c.scope: Deactivated successfully.
Feb 02 15:06:24 compute-0 conmon[75091]: conmon 0b29238367e0a5135613 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c.scope/container/memory.events
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:24.298558558 +0000 UTC m=+0.362200985 container died 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d876c867ba9802955ee3f9976b9dc79e61189dbcddb289e71d66339f000adf-merged.mount: Deactivated successfully.
Feb 02 15:06:24 compute-0 podman[75074]: 2026-02-02 15:06:24.464652467 +0000 UTC m=+0.528294904 container remove 0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c (image=quay.io/ceph/ceph:v20, name=elastic_mendeleev, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:06:24 compute-0 systemd[1]: libpod-conmon-0b29238367e0a5135613124090aa9433b478702a8d00e4213b4f7ea08357464c.scope: Deactivated successfully.
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.51731692 +0000 UTC m=+0.038310292 container create dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:06:24 compute-0 systemd[1]: Started libpod-conmon-dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090.scope.
Feb 02 15:06:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1c869d1c4fcb3b7416e8de1a0e3de8aa1ee2bff44464ceba0d6fed38e7c62/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1c869d1c4fcb3b7416e8de1a0e3de8aa1ee2bff44464ceba0d6fed38e7c62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1c869d1c4fcb3b7416e8de1a0e3de8aa1ee2bff44464ceba0d6fed38e7c62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1c869d1c4fcb3b7416e8de1a0e3de8aa1ee2bff44464ceba0d6fed38e7c62/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.589680777 +0000 UTC m=+0.110674239 container init dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.498385295 +0000 UTC m=+0.019378687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.595687004 +0000 UTC m=+0.116680396 container start dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.59959383 +0000 UTC m=+0.120587232 container attach dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: monmap epoch 1
Feb 02 15:06:24 compute-0 ceph-mon[74985]: fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:24 compute-0 ceph-mon[74985]: last_changed 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:24 compute-0 ceph-mon[74985]: created 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:24 compute-0 ceph-mon[74985]: min_mon_release 20 (tentacle)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: election_strategy: 1
Feb 02 15:06:24 compute-0 ceph-mon[74985]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 15:06:24 compute-0 ceph-mon[74985]: fsmap 
Feb 02 15:06:24 compute-0 ceph-mon[74985]: osdmap e1: 0 total, 0 up, 0 in
Feb 02 15:06:24 compute-0 ceph-mon[74985]: mgrmap e1: no daemons active
Feb 02 15:06:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3422340338' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 02 15:06:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3852679715' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:06:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3852679715' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 15:06:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:06:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617491495' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:06:24 compute-0 systemd[1]: libpod-dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090.scope: Deactivated successfully.
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.874211693 +0000 UTC m=+0.395205055 container died dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf1c869d1c4fcb3b7416e8de1a0e3de8aa1ee2bff44464ceba0d6fed38e7c62-merged.mount: Deactivated successfully.
Feb 02 15:06:24 compute-0 podman[75131]: 2026-02-02 15:06:24.921351341 +0000 UTC m=+0.442344743 container remove dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090 (image=quay.io/ceph/ceph:v20, name=agitated_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:06:24 compute-0 systemd[1]: libpod-conmon-dd7971f8b74c5ebe833e8912fdb912735f42d0e9cc6e6a9e88d1a154c0978090.scope: Deactivated successfully.
Feb 02 15:06:24 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:06:25 compute-0 ceph-mon[74985]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 02 15:06:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 02 15:06:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 shutdown
Feb 02 15:06:25 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0[74981]: 2026-02-02T15:06:25.126+0000 7fb68ba2d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 02 15:06:25 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0[74981]: 2026-02-02T15:06:25.127+0000 7fb68ba2d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 02 15:06:25 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 15:06:25 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 15:06:25 compute-0 podman[75214]: 2026-02-02 15:06:25.147266358 +0000 UTC m=+0.065044048 container died be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fd308d74adad8cb8108317b744dccf8d9625b553b74f7579cb61c6c6e109405-merged.mount: Deactivated successfully.
Feb 02 15:06:25 compute-0 podman[75214]: 2026-02-02 15:06:25.186557703 +0000 UTC m=+0.104335393 container remove be95d4208288f89cdac6abeb878532d869843b31b3135765910585a88c3161f4 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:25 compute-0 bash[75214]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0
Feb 02 15:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 02 15:06:25 compute-0 systemd[1]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mon.compute-0.service: Deactivated successfully.
Feb 02 15:06:25 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:25 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:06:25 compute-0 podman[75315]: 2026-02-02 15:06:25.476558233 +0000 UTC m=+0.045403255 container create a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c71c185fe20299dabdd2d0385d18ad12978b4cf45e87474f9fb8b2b7cf4e967/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c71c185fe20299dabdd2d0385d18ad12978b4cf45e87474f9fb8b2b7cf4e967/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c71c185fe20299dabdd2d0385d18ad12978b4cf45e87474f9fb8b2b7cf4e967/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c71c185fe20299dabdd2d0385d18ad12978b4cf45e87474f9fb8b2b7cf4e967/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 podman[75315]: 2026-02-02 15:06:25.530883208 +0000 UTC m=+0.099728260 container init a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:25 compute-0 podman[75315]: 2026-02-02 15:06:25.536351702 +0000 UTC m=+0.105196764 container start a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:06:25 compute-0 bash[75315]: a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59
Feb 02 15:06:25 compute-0 podman[75315]: 2026-02-02 15:06:25.45157778 +0000 UTC m=+0.020422892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:25 compute-0 systemd[1]: Started Ceph mon.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:25 compute-0 ceph-mon[75334]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: pidfile_write: ignore empty --pid-file
Feb 02 15:06:25 compute-0 ceph-mon[75334]: load: jerasure load: lrc 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Git sha 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: DB SUMMARY
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: DB Session ID:  808TM54KTF2S4YGE1ZJW
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                                     Options.env: 0x55e1f09b9440
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                                Options.info_log: 0x55e1f12b3e80
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                                 Options.wal_dir: 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                    Options.write_buffer_manager: 0x55e1f12fe140
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                               Options.row_cache: None
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                              Options.wal_filter: None
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.wal_compression: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.max_background_jobs: 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.max_total_wal_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:       Options.compaction_readahead_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Compression algorithms supported:
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kZSTD supported: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:           Options.merge_operator: 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:        Options.compaction_filter: None
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1f130aa00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e1f12ef8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:        Options.write_buffer_size: 33554432
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:  Options.max_write_buffer_number: 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.compression: NoCompression
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.num_levels: 7
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b7096c04-39ee-4763-9c12-88827d921c4c
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044785584847, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044785588577, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044785, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044785588740, "job": 1, "event": "recovery_finished"}
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e1f131ce00
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: DB pointer 0x55e1f1466000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:06:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:06:25 compute-0 ceph-mon[75334]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???) e1 preinit fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).mds e1 new map
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-02-02T15:06:23:601132+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 02 15:06:25 compute-0 ceph-mon[75334]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : created 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsmap 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.6120036 +0000 UTC m=+0.047436416 container create be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 02 15:06:25 compute-0 systemd[1]: Started libpod-conmon-be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345.scope.
Feb 02 15:06:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d926f3db359926fbe3c8d3bba32d26f77d61331f50f8964e2a77a8be8203f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d926f3db359926fbe3c8d3bba32d26f77d61331f50f8964e2a77a8be8203f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85d926f3db359926fbe3c8d3bba32d26f77d61331f50f8964e2a77a8be8203f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: monmap epoch 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:25 compute-0 ceph-mon[75334]: last_changed 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: created 2026-02-02T15:06:21.638370+0000
Feb 02 15:06:25 compute-0 ceph-mon[75334]: min_mon_release 20 (tentacle)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: election_strategy: 1
Feb 02 15:06:25 compute-0 ceph-mon[75334]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 02 15:06:25 compute-0 ceph-mon[75334]: fsmap 
Feb 02 15:06:25 compute-0 ceph-mon[75334]: osdmap e1: 0 total, 0 up, 0 in
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mgrmap e1: no daemons active
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.690301412 +0000 UTC m=+0.125734278 container init be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.598347744 +0000 UTC m=+0.033780560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.698593646 +0000 UTC m=+0.134026492 container start be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.702773679 +0000 UTC m=+0.138206495 container attach be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:06:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb 02 15:06:25 compute-0 systemd[1]: libpod-be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345.scope: Deactivated successfully.
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.907940286 +0000 UTC m=+0.343373132 container died be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e85d926f3db359926fbe3c8d3bba32d26f77d61331f50f8964e2a77a8be8203f-merged.mount: Deactivated successfully.
Feb 02 15:06:25 compute-0 podman[75335]: 2026-02-02 15:06:25.948104602 +0000 UTC m=+0.383537418 container remove be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345 (image=quay.io/ceph/ceph:v20, name=epic_cohen, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:25 compute-0 systemd[1]: libpod-conmon-be4749121dfff8b0f0a40cc6704e606145543665619b442f7c81683526703345.scope: Deactivated successfully.
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.013075307 +0000 UTC m=+0.044030351 container create 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:26 compute-0 systemd[1]: Started libpod-conmon-0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56.scope.
Feb 02 15:06:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d3a842e862e9e8f18fdbbdb978d2526e9cea4b0b88eb8515781f5ad883a156/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d3a842e862e9e8f18fdbbdb978d2526e9cea4b0b88eb8515781f5ad883a156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d3a842e862e9e8f18fdbbdb978d2526e9cea4b0b88eb8515781f5ad883a156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.078169146 +0000 UTC m=+0.109124170 container init 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.084583003 +0000 UTC m=+0.115538047 container start 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:25.989428307 +0000 UTC m=+0.020383401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.088656313 +0000 UTC m=+0.119611347 container attach 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:06:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb 02 15:06:26 compute-0 systemd[1]: libpod-0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56.scope: Deactivated successfully.
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.314484549 +0000 UTC m=+0.345439553 container died 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d3a842e862e9e8f18fdbbdb978d2526e9cea4b0b88eb8515781f5ad883a156-merged.mount: Deactivated successfully.
Feb 02 15:06:26 compute-0 podman[75427]: 2026-02-02 15:06:26.354008189 +0000 UTC m=+0.384963193 container remove 0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56 (image=quay.io/ceph/ceph:v20, name=upbeat_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:06:26 compute-0 systemd[1]: libpod-conmon-0df1fd3c4b7aeb303394c6e7f28c7b099a3721c764ea6a71373f7068c536be56.scope: Deactivated successfully.
Feb 02 15:06:26 compute-0 systemd[1]: Reloading.
Feb 02 15:06:26 compute-0 systemd-sysv-generator[75505]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:26 compute-0 systemd-rc-local-generator[75501]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:26 compute-0 systemd[1]: Reloading.
Feb 02 15:06:26 compute-0 systemd-rc-local-generator[75548]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:26 compute-0 systemd-sysv-generator[75554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:26 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rxryxi for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:06:27 compute-0 podman[75609]: 2026-02-02 15:06:27.066486494 +0000 UTC m=+0.020890734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:27 compute-0 podman[75609]: 2026-02-02 15:06:27.406887962 +0000 UTC m=+0.361292182 container create b5c8003f4156ee59dc5d142a0f59c051e22d50f246e7ffe73bf8f408e3761baa (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81cf53e9f8f64ff95163f090393d7e13c29bec87f8a2e6c355943527541cd0aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81cf53e9f8f64ff95163f090393d7e13c29bec87f8a2e6c355943527541cd0aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81cf53e9f8f64ff95163f090393d7e13c29bec87f8a2e6c355943527541cd0aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81cf53e9f8f64ff95163f090393d7e13c29bec87f8a2e6c355943527541cd0aa/merged/var/lib/ceph/mgr/ceph-compute-0.rxryxi supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 podman[75609]: 2026-02-02 15:06:27.463638035 +0000 UTC m=+0.418042275 container init b5c8003f4156ee59dc5d142a0f59c051e22d50f246e7ffe73bf8f408e3761baa (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:06:27 compute-0 podman[75609]: 2026-02-02 15:06:27.468975826 +0000 UTC m=+0.423380046 container start b5c8003f4156ee59dc5d142a0f59c051e22d50f246e7ffe73bf8f408e3761baa (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:06:27 compute-0 bash[75609]: b5c8003f4156ee59dc5d142a0f59c051e22d50f246e7ffe73bf8f408e3761baa
Feb 02 15:06:27 compute-0 systemd[1]: Started Ceph mgr.compute-0.rxryxi for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: pidfile_write: ignore empty --pid-file
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.54608701 +0000 UTC m=+0.037083742 container create 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'alerts'
Feb 02 15:06:27 compute-0 systemd[1]: Started libpod-conmon-0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f.scope.
Feb 02 15:06:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1992de93fcb0e8e37abfebfc994b924e1db3524a0aee5cb7cadd0cd2d2adf1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1992de93fcb0e8e37abfebfc994b924e1db3524a0aee5cb7cadd0cd2d2adf1b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1992de93fcb0e8e37abfebfc994b924e1db3524a0aee5cb7cadd0cd2d2adf1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.625595952 +0000 UTC m=+0.116592694 container init 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.529170474 +0000 UTC m=+0.020167216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.634418428 +0000 UTC m=+0.125415140 container start 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.637784542 +0000 UTC m=+0.128781264 container attach 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'balancer'
Feb 02 15:06:27 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'cephadm'
Feb 02 15:06:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 15:06:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070616853' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]: 
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]: {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "health": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "status": "HEALTH_OK",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "checks": {},
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "mutes": []
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "election_epoch": 5,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "quorum": [
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         0
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     ],
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "quorum_names": [
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "compute-0"
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     ],
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "quorum_age": 2,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "monmap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "epoch": 1,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "min_mon_release_name": "tentacle",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_mons": 1
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "osdmap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "epoch": 1,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_osds": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_up_osds": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "osd_up_since": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_in_osds": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "osd_in_since": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_remapped_pgs": 0
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "pgmap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "pgs_by_state": [],
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_pgs": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_pools": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_objects": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "data_bytes": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "bytes_used": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "bytes_avail": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "bytes_total": 0
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "fsmap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "epoch": 1,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "btime": "2026-02-02T15:06:23:601132+0000",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "by_rank": [],
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "up:standby": 0
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "mgrmap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "available": false,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "num_standbys": 0,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "modules": [
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:             "iostat",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:             "nfs"
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         ],
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "services": {}
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "servicemap": {
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "epoch": 1,
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "modified": "2026-02-02T15:06:23.603344+0000",
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:         "services": {}
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     },
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]:     "progress_events": {}
Feb 02 15:06:27 compute-0 flamboyant_mahavira[75665]: }
Feb 02 15:06:27 compute-0 systemd[1]: libpod-0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f.scope: Deactivated successfully.
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.828031943 +0000 UTC m=+0.319028705 container died 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:06:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1992de93fcb0e8e37abfebfc994b924e1db3524a0aee5cb7cadd0cd2d2adf1b6-merged.mount: Deactivated successfully.
Feb 02 15:06:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2070616853' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:27 compute-0 podman[75629]: 2026-02-02 15:06:27.869762967 +0000 UTC m=+0.360759689 container remove 0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f (image=quay.io/ceph/ceph:v20, name=flamboyant_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:27 compute-0 systemd[1]: libpod-conmon-0f9a6fcaac11767f3dad2c2dd0b545bf2fdfc2b7ea699a2941c3d5226138a45f.scope: Deactivated successfully.
Feb 02 15:06:28 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'crash'
Feb 02 15:06:28 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'dashboard'
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'devicehealth'
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 15:06:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 15:06:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 15:06:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]:   from numpy import show_config as show_numpy_config
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'influx'
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'insights'
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'iostat'
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'k8sevents'
Feb 02 15:06:29 compute-0 podman[75713]: 2026-02-02 15:06:29.928578412 +0000 UTC m=+0.039516332 container create 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:06:29 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'localpool'
Feb 02 15:06:29 compute-0 systemd[1]: Started libpod-conmon-6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b.scope.
Feb 02 15:06:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f90facdf263e9416346092cb4a35be0c39e95f19eaa255b75fad32d31be7639/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f90facdf263e9416346092cb4a35be0c39e95f19eaa255b75fad32d31be7639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f90facdf263e9416346092cb4a35be0c39e95f19eaa255b75fad32d31be7639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:30 compute-0 podman[75713]: 2026-02-02 15:06:29.913339127 +0000 UTC m=+0.024277047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:30 compute-0 podman[75713]: 2026-02-02 15:06:30.016663285 +0000 UTC m=+0.127601235 container init 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:30 compute-0 podman[75713]: 2026-02-02 15:06:30.020693644 +0000 UTC m=+0.131631604 container start 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 15:06:30 compute-0 podman[75713]: 2026-02-02 15:06:30.028195777 +0000 UTC m=+0.139133707 container attach 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 02 15:06:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 15:06:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624877511' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:30 compute-0 exciting_buck[75729]: 
Feb 02 15:06:30 compute-0 exciting_buck[75729]: {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "health": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "status": "HEALTH_OK",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "checks": {},
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "mutes": []
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "election_epoch": 5,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "quorum": [
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         0
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     ],
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "quorum_names": [
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "compute-0"
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     ],
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "quorum_age": 4,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "monmap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "epoch": 1,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "min_mon_release_name": "tentacle",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_mons": 1
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "osdmap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "epoch": 1,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_osds": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_up_osds": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "osd_up_since": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_in_osds": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "osd_in_since": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_remapped_pgs": 0
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "pgmap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "pgs_by_state": [],
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_pgs": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_pools": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_objects": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "data_bytes": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "bytes_used": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "bytes_avail": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "bytes_total": 0
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "fsmap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "epoch": 1,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "btime": "2026-02-02T15:06:23:601132+0000",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "by_rank": [],
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "up:standby": 0
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "mgrmap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "available": false,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "num_standbys": 0,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "modules": [
Feb 02 15:06:30 compute-0 exciting_buck[75729]:             "iostat",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:             "nfs"
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         ],
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "services": {}
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "servicemap": {
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "epoch": 1,
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "modified": "2026-02-02T15:06:23.603344+0000",
Feb 02 15:06:30 compute-0 exciting_buck[75729]:         "services": {}
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     },
Feb 02 15:06:30 compute-0 exciting_buck[75729]:     "progress_events": {}
Feb 02 15:06:30 compute-0 exciting_buck[75729]: }
Feb 02 15:06:30 compute-0 systemd[1]: libpod-6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b.scope: Deactivated successfully.
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'mirroring'
Feb 02 15:06:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1624877511' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:30 compute-0 podman[75755]: 2026-02-02 15:06:30.275337986 +0000 UTC m=+0.036254531 container died 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f90facdf263e9416346092cb4a35be0c39e95f19eaa255b75fad32d31be7639-merged.mount: Deactivated successfully.
Feb 02 15:06:30 compute-0 podman[75755]: 2026-02-02 15:06:30.305674591 +0000 UTC m=+0.066591146 container remove 6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b (image=quay.io/ceph/ceph:v20, name=exciting_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:06:30 compute-0 systemd[1]: libpod-conmon-6bcb52c57f6b8a7891caba1d4042c2df3d28c46b2f503d36a604729274f0745b.scope: Deactivated successfully.
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'nfs'
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'orchestrator'
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'osd_support'
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 15:06:30 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'progress'
Feb 02 15:06:31 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'prometheus'
Feb 02 15:06:31 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rbd_support'
Feb 02 15:06:31 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rgw'
Feb 02 15:06:31 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rook'
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'selftest'
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'smb'
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.406902516 +0000 UTC m=+0.074384297 container create 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:06:32 compute-0 systemd[1]: Started libpod-conmon-68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807.scope.
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.365104639 +0000 UTC m=+0.032586510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b5118ecf9ea35db771ebd8986e707f493fcd4e61302bc462e1efcd08b436eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b5118ecf9ea35db771ebd8986e707f493fcd4e61302bc462e1efcd08b436eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b5118ecf9ea35db771ebd8986e707f493fcd4e61302bc462e1efcd08b436eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.524318739 +0000 UTC m=+0.191800550 container init 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.528677906 +0000 UTC m=+0.196159687 container start 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.554157062 +0000 UTC m=+0.221638873 container attach 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'snap_schedule'
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'stats'
Feb 02 15:06:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 15:06:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588663668' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]: 
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]: {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "health": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "status": "HEALTH_OK",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "checks": {},
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "mutes": []
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "election_epoch": 5,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "quorum": [
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         0
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     ],
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "quorum_names": [
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "compute-0"
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     ],
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "quorum_age": 7,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "monmap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "epoch": 1,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "min_mon_release_name": "tentacle",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_mons": 1
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "osdmap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "epoch": 1,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_osds": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_up_osds": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "osd_up_since": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_in_osds": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "osd_in_since": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_remapped_pgs": 0
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "pgmap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "pgs_by_state": [],
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_pgs": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_pools": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_objects": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "data_bytes": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "bytes_used": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "bytes_avail": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "bytes_total": 0
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "fsmap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "epoch": 1,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "btime": "2026-02-02T15:06:23:601132+0000",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "by_rank": [],
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "up:standby": 0
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "mgrmap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "available": false,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "num_standbys": 0,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "modules": [
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:             "iostat",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:             "nfs"
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         ],
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "services": {}
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "servicemap": {
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "epoch": 1,
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "modified": "2026-02-02T15:06:23.603344+0000",
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:         "services": {}
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     },
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]:     "progress_events": {}
Feb 02 15:06:32 compute-0 exciting_zhukovsky[75786]: }
Feb 02 15:06:32 compute-0 systemd[1]: libpod-68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807.scope: Deactivated successfully.
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.701328305 +0000 UTC m=+0.368810096 container died 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b5118ecf9ea35db771ebd8986e707f493fcd4e61302bc462e1efcd08b436eb-merged.mount: Deactivated successfully.
Feb 02 15:06:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3588663668' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:32 compute-0 podman[75770]: 2026-02-02 15:06:32.749009026 +0000 UTC m=+0.416490807 container remove 68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807 (image=quay.io/ceph/ceph:v20, name=exciting_zhukovsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:32 compute-0 systemd[1]: libpod-conmon-68cedcffaab413a44365726e08f82dcf3a58de66bd13bccb41338796d4df9807.scope: Deactivated successfully.
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'status'
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'telegraf'
Feb 02 15:06:32 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'telemetry'
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'volumes'
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: ms_deliver_dispatch: unhandled message 0x555c50f5d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rxryxi
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr handle_mgr_map Activating!
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr handle_mgr_map I am now activating
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rxryxi(active, starting, since 0.0232478s)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: balancer
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer INFO root] Starting
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: crash
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:06:33
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [balancer INFO root] No pools available
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Manager daemon compute-0.rxryxi is now available
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: devicehealth
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: iostat
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Starting
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: nfs
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: orchestrator
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: pg_autoscaler
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: progress
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [progress INFO root] Loading...
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [progress INFO root] No stored events to load
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [progress INFO root] Loaded [] historic events
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] recovery thread starting
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] starting setup
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: rbd_support
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: status
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: telemetry
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] PerfHandler: starting
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TaskHandler: starting
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} v 0)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: [rbd_support INFO root] setup complete
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb 02 15:06:33 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: volumes
Feb 02 15:06:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:33 compute-0 ceph-mon[75334]: Activating manager daemon compute-0.rxryxi
Feb 02 15:06:33 compute-0 ceph-mon[75334]: mgrmap e2: compute-0.rxryxi(active, starting, since 0.0232478s)
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: Manager daemon compute-0.rxryxi is now available
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} : dispatch
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:33 compute-0 ceph-mon[75334]: from='mgr.14102 192.168.122.100:0/927681521' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rxryxi(active, since 1.03259s)
Feb 02 15:06:34 compute-0 podman[75902]: 2026-02-02 15:06:34.794163504 +0000 UTC m=+0.029113296 container create 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:06:34 compute-0 systemd[1]: Started libpod-conmon-31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9.scope.
Feb 02 15:06:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75275fe284f6854c32ede37766ad9f80c350e31b57e06ad9626249c677c91383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75275fe284f6854c32ede37766ad9f80c350e31b57e06ad9626249c677c91383/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75275fe284f6854c32ede37766ad9f80c350e31b57e06ad9626249c677c91383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:34 compute-0 podman[75902]: 2026-02-02 15:06:34.861860196 +0000 UTC m=+0.096810008 container init 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:34 compute-0 podman[75902]: 2026-02-02 15:06:34.86688867 +0000 UTC m=+0.101838472 container start 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:34 compute-0 podman[75902]: 2026-02-02 15:06:34.870467587 +0000 UTC m=+0.105417379 container attach 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:06:34 compute-0 podman[75902]: 2026-02-02 15:06:34.781908663 +0000 UTC m=+0.016858475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 02 15:06:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375442504' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:35 compute-0 serene_wilbur[75918]: 
Feb 02 15:06:35 compute-0 serene_wilbur[75918]: {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "health": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "status": "HEALTH_OK",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "checks": {},
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "mutes": []
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "election_epoch": 5,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "quorum": [
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         0
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     ],
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "quorum_names": [
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "compute-0"
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     ],
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "quorum_age": 9,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "monmap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "epoch": 1,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "min_mon_release_name": "tentacle",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_mons": 1
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "osdmap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "epoch": 1,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_osds": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_up_osds": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "osd_up_since": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_in_osds": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "osd_in_since": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_remapped_pgs": 0
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "pgmap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "pgs_by_state": [],
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_pgs": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_pools": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_objects": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "data_bytes": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "bytes_used": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "bytes_avail": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "bytes_total": 0
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "fsmap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "epoch": 1,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "btime": "2026-02-02T15:06:23.601132+0000",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "by_rank": [],
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "up:standby": 0
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "mgrmap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "available": true,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "num_standbys": 0,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "modules": [
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:             "iostat",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:             "nfs"
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         ],
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "services": {}
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "servicemap": {
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "epoch": 1,
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "modified": "2026-02-02T15:06:23.603344+0000",
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:         "services": {}
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     },
Feb 02 15:06:35 compute-0 serene_wilbur[75918]:     "progress_events": {}
Feb 02 15:06:35 compute-0 serene_wilbur[75918]: }
Feb 02 15:06:35 compute-0 systemd[1]: libpod-31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9.scope: Deactivated successfully.
Feb 02 15:06:35 compute-0 podman[75902]: 2026-02-02 15:06:35.353376905 +0000 UTC m=+0.588326737 container died 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-75275fe284f6854c32ede37766ad9f80c350e31b57e06ad9626249c677c91383-merged.mount: Deactivated successfully.
Feb 02 15:06:35 compute-0 podman[75902]: 2026-02-02 15:06:35.393961142 +0000 UTC m=+0.628910964 container remove 31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9 (image=quay.io/ceph/ceph:v20, name=serene_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:35 compute-0 systemd[1]: libpod-conmon-31ffed5844db32ad60e8156ac04b0f3cea6c20cc65a29c8955663f19546c9cb9.scope: Deactivated successfully.
Feb 02 15:06:35 compute-0 podman[75956]: 2026-02-02 15:06:35.460545827 +0000 UTC m=+0.046009751 container create bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:06:35 compute-0 systemd[1]: Started libpod-conmon-bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6.scope.
Feb 02 15:06:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2ab9c0b01642a0a42f26fb85a0677b628a229093e8da2708c49f68bd1bb340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2ab9c0b01642a0a42f26fb85a0677b628a229093e8da2708c49f68bd1bb340/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2ab9c0b01642a0a42f26fb85a0677b628a229093e8da2708c49f68bd1bb340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2ab9c0b01642a0a42f26fb85a0677b628a229093e8da2708c49f68bd1bb340/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:35 compute-0 podman[75956]: 2026-02-02 15:06:35.443049267 +0000 UTC m=+0.028513181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:35 compute-0 podman[75956]: 2026-02-02 15:06:35.54126241 +0000 UTC m=+0.126726384 container init bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:06:35 compute-0 podman[75956]: 2026-02-02 15:06:35.546359794 +0000 UTC m=+0.131823678 container start bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:06:35 compute-0 podman[75956]: 2026-02-02 15:06:35.549778579 +0000 UTC m=+0.135242513 container attach bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:35 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:35 compute-0 ceph-mon[75334]: mgrmap e3: compute-0.rxryxi(active, since 1.03259s)
Feb 02 15:06:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/375442504' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 02 15:06:35 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rxryxi(active, since 2s)
Feb 02 15:06:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 15:06:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4059359667' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:06:35 compute-0 wizardly_einstein[75972]: 
Feb 02 15:06:35 compute-0 wizardly_einstein[75972]: [global]
Feb 02 15:06:35 compute-0 wizardly_einstein[75972]:         fsid = e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:35 compute-0 wizardly_einstein[75972]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 02 15:06:35 compute-0 wizardly_einstein[75972]:         osd_crush_chooseleaf_type = 0
Feb 02 15:06:35 compute-0 systemd[1]: libpod-bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6.scope: Deactivated successfully.
Feb 02 15:06:35 compute-0 podman[75998]: 2026-02-02 15:06:35.98765102 +0000 UTC m=+0.021161671 container died bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2ab9c0b01642a0a42f26fb85a0677b628a229093e8da2708c49f68bd1bb340-merged.mount: Deactivated successfully.
Feb 02 15:06:36 compute-0 podman[75998]: 2026-02-02 15:06:36.018132808 +0000 UTC m=+0.051643449 container remove bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6 (image=quay.io/ceph/ceph:v20, name=wizardly_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:36 compute-0 systemd[1]: libpod-conmon-bcc6f92331670fb02bda4c3547195b78047b00382740e102bd0e3ce2214e51c6.scope: Deactivated successfully.
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.083456933 +0000 UTC m=+0.044228168 container create c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:36 compute-0 systemd[1]: Started libpod-conmon-c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558.scope.
Feb 02 15:06:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4634203e5329a642353f222d64b864c8eacc8fa4b52e5a4d83e02ba056242259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4634203e5329a642353f222d64b864c8eacc8fa4b52e5a4d83e02ba056242259/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4634203e5329a642353f222d64b864c8eacc8fa4b52e5a4d83e02ba056242259/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.138685098 +0000 UTC m=+0.099456383 container init c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.142262337 +0000 UTC m=+0.103033572 container start c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.145696661 +0000 UTC m=+0.106467896 container attach c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.068533846 +0000 UTC m=+0.029305101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb 02 15:06:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049152506' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 02 15:06:36 compute-0 ceph-mon[75334]: mgrmap e4: compute-0.rxryxi(active, since 2s)
Feb 02 15:06:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4059359667' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:06:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2049152506' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 02 15:06:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2049152506' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr respawn  1: '-n'
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr respawn  2: 'mgr.compute-0.rxryxi'
Feb 02 15:06:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rxryxi(active, since 3s)
Feb 02 15:06:36 compute-0 systemd[1]: libpod-c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558.scope: Deactivated successfully.
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.661237289 +0000 UTC m=+0.622008544 container died c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4634203e5329a642353f222d64b864c8eacc8fa4b52e5a4d83e02ba056242259-merged.mount: Deactivated successfully.
Feb 02 15:06:36 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: ignoring --setuser ceph since I am not root
Feb 02 15:06:36 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: ignoring --setgroup ceph since I am not root
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: pidfile_write: ignore empty --pid-file
Feb 02 15:06:36 compute-0 podman[76013]: 2026-02-02 15:06:36.727315923 +0000 UTC m=+0.688087198 container remove c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558 (image=quay.io/ceph/ceph:v20, name=busy_solomon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'alerts'
Feb 02 15:06:36 compute-0 systemd[1]: libpod-conmon-c5be1346ff62bd252f5bfa3b8c00df7f20453a7b7910f5f382aaca5c784b7558.scope: Deactivated successfully.
Feb 02 15:06:36 compute-0 podman[76086]: 2026-02-02 15:06:36.787201512 +0000 UTC m=+0.044998185 container create 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:06:36 compute-0 systemd[1]: Started libpod-conmon-79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0.scope.
Feb 02 15:06:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fceec30394d86057496e01f818c30b048f50685587236a10362e0c00c4329c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fceec30394d86057496e01f818c30b048f50685587236a10362e0c00c4329c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fceec30394d86057496e01f818c30b048f50685587236a10362e0c00c4329c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'balancer'
Feb 02 15:06:36 compute-0 podman[76086]: 2026-02-02 15:06:36.839381114 +0000 UTC m=+0.097177807 container init 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:36 compute-0 podman[76086]: 2026-02-02 15:06:36.843245918 +0000 UTC m=+0.101042571 container start 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:36 compute-0 podman[76086]: 2026-02-02 15:06:36.857615922 +0000 UTC m=+0.115412595 container attach 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:06:36 compute-0 podman[76086]: 2026-02-02 15:06:36.766766711 +0000 UTC m=+0.024563414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:36 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'cephadm'
Feb 02 15:06:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 02 15:06:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572221113' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]: {
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]:     "epoch": 5,
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]:     "available": true,
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]:     "active_name": "compute-0.rxryxi",
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]:     "num_standby": 0
Feb 02 15:06:37 compute-0 crazy_mirzakhani[76102]: }
Feb 02 15:06:37 compute-0 systemd[1]: libpod-79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0.scope: Deactivated successfully.
Feb 02 15:06:37 compute-0 podman[76086]: 2026-02-02 15:06:37.352477613 +0000 UTC m=+0.610274276 container died 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-92fceec30394d86057496e01f818c30b048f50685587236a10362e0c00c4329c-merged.mount: Deactivated successfully.
Feb 02 15:06:37 compute-0 podman[76086]: 2026-02-02 15:06:37.388536188 +0000 UTC m=+0.646332841 container remove 79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0 (image=quay.io/ceph/ceph:v20, name=crazy_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:37 compute-0 systemd[1]: libpod-conmon-79f20bafad38ad46b50672b165e8a5de005687b901183f030dc32f9c91e1f0c0.scope: Deactivated successfully.
Feb 02 15:06:37 compute-0 podman[76149]: 2026-02-02 15:06:37.437766677 +0000 UTC m=+0.032951579 container create c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:37 compute-0 systemd[1]: Started libpod-conmon-c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18.scope.
Feb 02 15:06:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a05c40dac9f4dd80b21e7d11feb20bd2c0ee3610934359e3699195d912beeb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a05c40dac9f4dd80b21e7d11feb20bd2c0ee3610934359e3699195d912beeb0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a05c40dac9f4dd80b21e7d11feb20bd2c0ee3610934359e3699195d912beeb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:37 compute-0 podman[76149]: 2026-02-02 15:06:37.48877007 +0000 UTC m=+0.083955002 container init c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:06:37 compute-0 podman[76149]: 2026-02-02 15:06:37.495077844 +0000 UTC m=+0.090262756 container start c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:37 compute-0 podman[76149]: 2026-02-02 15:06:37.498821717 +0000 UTC m=+0.094006649 container attach c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:37 compute-0 podman[76149]: 2026-02-02 15:06:37.423945368 +0000 UTC m=+0.019130300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:37 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'crash'
Feb 02 15:06:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2049152506' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 02 15:06:37 compute-0 ceph-mon[75334]: mgrmap e5: compute-0.rxryxi(active, since 3s)
Feb 02 15:06:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2572221113' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 02 15:06:37 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'dashboard'
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'devicehealth'
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 15:06:38 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 15:06:38 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 15:06:38 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]:   from numpy import show_config as show_numpy_config
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'influx'
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'insights'
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'iostat'
Feb 02 15:06:38 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'k8sevents'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'localpool'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'mirroring'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'nfs'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'orchestrator'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 15:06:39 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'osd_support'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'progress'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'prometheus'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rbd_support'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rgw'
Feb 02 15:06:40 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'rook'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'selftest'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'smb'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'snap_schedule'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'stats'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'status'
Feb 02 15:06:41 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'telegraf'
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'telemetry'
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'test_orchestrator'
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr[py] Loading python module 'volumes'
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rxryxi restarted
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rxryxi
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: ms_deliver_dispatch: unhandled message 0x56513d168000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr handle_mgr_map Activating!
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr handle_mgr_map I am now activating
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rxryxi(active, starting, since 0.0154816s)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} v 0)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e1 all = 1
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: balancer
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Manager daemon compute-0.rxryxi is now available
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Starting
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:06:42
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:06:42 compute-0 ceph-mgr[75628]: [balancer INFO root] No pools available
Feb 02 15:06:42 compute-0 ceph-mon[75334]: Active manager daemon compute-0.rxryxi restarted
Feb 02 15:06:42 compute-0 ceph-mon[75334]: Activating manager daemon compute-0.rxryxi
Feb 02 15:06:42 compute-0 ceph-mon[75334]: osdmap e2: 0 total, 0 up, 0 in
Feb 02 15:06:42 compute-0 ceph-mon[75334]: mgrmap e6: compute-0.rxryxi(active, starting, since 0.0154816s)
Feb 02 15:06:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr metadata", "who": "compute-0.rxryxi", "id": "compute-0.rxryxi"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata"} : dispatch
Feb 02 15:06:42 compute-0 ceph-mon[75334]: Manager daemon compute-0.rxryxi is now available
Feb 02 15:06:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rxryxi(active, since 1.15899s)
Feb 02 15:06:43 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 02 15:06:43 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 02 15:06:43 compute-0 boring_mahavira[76165]: {
Feb 02 15:06:43 compute-0 boring_mahavira[76165]:     "mgrmap_epoch": 7,
Feb 02 15:06:43 compute-0 boring_mahavira[76165]:     "initialized": true
Feb 02 15:06:43 compute-0 boring_mahavira[76165]: }
Feb 02 15:06:43 compute-0 systemd[1]: libpod-c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18.scope: Deactivated successfully.
Feb 02 15:06:43 compute-0 podman[76149]: 2026-02-02 15:06:43.872354407 +0000 UTC m=+6.467539319 container died c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a05c40dac9f4dd80b21e7d11feb20bd2c0ee3610934359e3699195d912beeb0-merged.mount: Deactivated successfully.
Feb 02 15:06:43 compute-0 podman[76149]: 2026-02-02 15:06:43.903674305 +0000 UTC m=+6.498859217 container remove c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18 (image=quay.io/ceph/ceph:v20, name=boring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:43 compute-0 systemd[1]: libpod-conmon-c0bd1f38b347c063db87e05c93713790356153773ab93a603d229609fd0e0d18.scope: Deactivated successfully.
Feb 02 15:06:43 compute-0 podman[76234]: 2026-02-02 15:06:43.961582407 +0000 UTC m=+0.042145606 container create 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:06:43 compute-0 systemd[1]: Started libpod-conmon-0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f.scope.
Feb 02 15:06:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1ede2fb55acffd8e343ec047f607a90f01d5bc2a552cf66ec105a8409aed40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1ede2fb55acffd8e343ec047f607a90f01d5bc2a552cf66ec105a8409aed40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1ede2fb55acffd8e343ec047f607a90f01d5bc2a552cf66ec105a8409aed40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:44.030159981 +0000 UTC m=+0.110723200 container init 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:44.034382454 +0000 UTC m=+0.114945643 container start 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:44.037794369 +0000 UTC m=+0.118357558 container attach 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:43.943290378 +0000 UTC m=+0.023853587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2820322426' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: cephadm
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: crash
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: devicehealth
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Starting
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: iostat
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: nfs
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: orchestrator
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: pg_autoscaler
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: progress
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [progress INFO root] Loading...
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [progress INFO root] No stored events to load
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [progress INFO root] Loaded [] historic events
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [progress INFO root] Loaded OSDMap, ready.
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] recovery thread starting
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] starting setup
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: rbd_support
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: status
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: telemetry
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] PerfHandler: starting
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TaskHandler: starting
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} v 0)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] setup complete
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr load Constructed class from module: volumes
Feb 02 15:06:44 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:44 compute-0 ceph-mon[75334]: mgrmap e7: compute-0.rxryxi(active, since 1.15899s)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2820322426' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: Found migration_current of "None". Setting to last migration.
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/mirror_snapshot_schedule"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rxryxi/trash_purge_schedule"} : dispatch
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2820322426' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 02 15:06:44 compute-0 dreamy_engelbart[76251]: module 'orchestrator' is already enabled (always-on)
Feb 02 15:06:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rxryxi(active, since 2s)
Feb 02 15:06:44 compute-0 systemd[1]: libpod-0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f.scope: Deactivated successfully.
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:44.880127962 +0000 UTC m=+0.960691151 container died 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1ede2fb55acffd8e343ec047f607a90f01d5bc2a552cf66ec105a8409aed40-merged.mount: Deactivated successfully.
Feb 02 15:06:44 compute-0 podman[76234]: 2026-02-02 15:06:44.923723933 +0000 UTC m=+1.004287122 container remove 0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f (image=quay.io/ceph/ceph:v20, name=dreamy_engelbart, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:44 compute-0 systemd[1]: libpod-conmon-0fe7a018cb0ae219443b6c86bbce90f06946558edae414b4d9fe6b64ff436b7f.scope: Deactivated successfully.
Feb 02 15:06:44 compute-0 podman[76367]: 2026-02-02 15:06:44.983255474 +0000 UTC m=+0.040355672 container create 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:45 compute-0 systemd[1]: Started libpod-conmon-22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e.scope.
Feb 02 15:06:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213cde0decb62a55019a2b1c60806bb3da79a9e5d690b595f8969dc87a35e2de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213cde0decb62a55019a2b1c60806bb3da79a9e5d690b595f8969dc87a35e2de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213cde0decb62a55019a2b1c60806bb3da79a9e5d690b595f8969dc87a35e2de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:45.051886159 +0000 UTC m=+0.108986377 container init 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:45.056131373 +0000 UTC m=+0.113231571 container start 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:45.059590219 +0000 UTC m=+0.116690417 container attach 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:44.966066072 +0000 UTC m=+0.023166280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:45 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb 02 15:06:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:06:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:45 compute-0 systemd[1]: libpod-22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e.scope: Deactivated successfully.
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:45.483835896 +0000 UTC m=+0.540936104 container died 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-213cde0decb62a55019a2b1c60806bb3da79a9e5d690b595f8969dc87a35e2de-merged.mount: Deactivated successfully.
Feb 02 15:06:45 compute-0 podman[76367]: 2026-02-02 15:06:45.524344241 +0000 UTC m=+0.581444439 container remove 22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e (image=quay.io/ceph/ceph:v20, name=stupefied_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:45 compute-0 systemd[1]: libpod-conmon-22dcee9ca27b0a6dada8b3cccd4752a0294388caa028270069fae022b39bde1e.scope: Deactivated successfully.
Feb 02 15:06:45 compute-0 podman[76418]: 2026-02-02 15:06:45.589359397 +0000 UTC m=+0.043944750 container create 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019907941 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:06:45 compute-0 systemd[1]: Started libpod-conmon-5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14.scope.
Feb 02 15:06:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32144187ef420100a67a1f9b4353345a88deba83d299a18997c70ae702397423/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32144187ef420100a67a1f9b4353345a88deba83d299a18997c70ae702397423/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32144187ef420100a67a1f9b4353345a88deba83d299a18997c70ae702397423/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:45 compute-0 podman[76418]: 2026-02-02 15:06:45.569078109 +0000 UTC m=+0.023663422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:45 compute-0 podman[76418]: 2026-02-02 15:06:45.682033412 +0000 UTC m=+0.136618745 container init 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 02 15:06:45 compute-0 podman[76418]: 2026-02-02 15:06:45.68644893 +0000 UTC m=+0.141034243 container start 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:06:45 compute-0 podman[76418]: 2026-02-02 15:06:45.689174737 +0000 UTC m=+0.143760050 container attach 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2820322426' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 02 15:06:45 compute-0 ceph-mon[75334]: mgrmap e8: compute-0.rxryxi(active, since 2s)
Feb 02 15:06:45 compute-0 ceph-mon[75334]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb 02 15:06:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO root] Set ssh ssh_user
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb 02 15:06:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb 02 15:06:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO root] Set ssh ssh_config
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb 02 15:06:46 compute-0 tender_villani[76435]: ssh user set to ceph-admin. sudo will be used
Feb 02 15:06:46 compute-0 systemd[1]: libpod-5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14.scope: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76418]: 2026-02-02 15:06:46.084080674 +0000 UTC m=+0.538665987 container died 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-32144187ef420100a67a1f9b4353345a88deba83d299a18997c70ae702397423-merged.mount: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76418]: 2026-02-02 15:06:46.118057169 +0000 UTC m=+0.572642482 container remove 5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14 (image=quay.io/ceph/ceph:v20, name=tender_villani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:46 compute-0 systemd[1]: libpod-conmon-5ee1e6d24a92b2f7988d8910996133e3ac9ead67f0d4702d77652089e5271c14.scope: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.177522819 +0000 UTC m=+0.040456975 container create e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:46 compute-0 systemd[1]: Started libpod-conmon-e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e.scope.
Feb 02 15:06:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.160499141 +0000 UTC m=+0.023433307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.271897857 +0000 UTC m=+0.134832063 container init e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.276736425 +0000 UTC m=+0.139670601 container start e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.279512433 +0000 UTC m=+0.142446599 container attach e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO cherrypy.error] [02/Feb/2026:15:06:46] ENGINE Bus STARTING
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : [02/Feb/2026:15:06:46] ENGINE Bus STARTING
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO cherrypy.error] [02/Feb/2026:15:06:46] ENGINE Serving on http://192.168.122.100:8765
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : [02/Feb/2026:15:06:46] ENGINE Serving on http://192.168.122.100:8765
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO cherrypy.error] [02/Feb/2026:15:06:46] ENGINE Serving on https://192.168.122.100:7150
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : [02/Feb/2026:15:06:46] ENGINE Serving on https://192.168.122.100:7150
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO cherrypy.error] [02/Feb/2026:15:06:46] ENGINE Bus STARTED
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : [02/Feb/2026:15:06:46] ENGINE Bus STARTED
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO cherrypy.error] [02/Feb/2026:15:06:46] ENGINE Client ('192.168.122.100', 58792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : [02/Feb/2026:15:06:46] ENGINE Client ('192.168.122.100', 58792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 15:06:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:06:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb 02 15:06:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO root] Set ssh ssh_identity_key
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: [cephadm INFO root] Set ssh private key
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Set ssh private key
Feb 02 15:06:46 compute-0 systemd[1]: libpod-e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e.scope: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.702638673 +0000 UTC m=+0.565572849 container died e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:06:46 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b29746cef69b725f14fd479ddbc67cfcf966930b51268ea5ed48bce4610593-merged.mount: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76473]: 2026-02-02 15:06:46.739268802 +0000 UTC m=+0.602202968 container remove e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e (image=quay.io/ceph/ceph:v20, name=confident_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:46 compute-0 systemd[1]: libpod-conmon-e54f76f79e77c09b32706c55a7207657a7ef3ea25dc72bce2f9427a2bef5189e.scope: Deactivated successfully.
Feb 02 15:06:46 compute-0 podman[76551]: 2026-02-02 15:06:46.79577964 +0000 UTC m=+0.037001440 container create f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:46 compute-0 systemd[1]: Started libpod-conmon-f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28.scope.
Feb 02 15:06:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:46 compute-0 podman[76551]: 2026-02-02 15:06:46.867389239 +0000 UTC m=+0.108611049 container init f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:46 compute-0 podman[76551]: 2026-02-02 15:06:46.874858172 +0000 UTC m=+0.116079992 container start f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:46 compute-0 podman[76551]: 2026-02-02 15:06:46.779422548 +0000 UTC m=+0.020644358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:46 compute-0 podman[76551]: 2026-02-02 15:06:46.878507532 +0000 UTC m=+0.119729352 container attach f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:47 compute-0 ceph-mon[75334]: Set ssh ssh_user
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:47 compute-0 ceph-mon[75334]: Set ssh ssh_config
Feb 02 15:06:47 compute-0 ceph-mon[75334]: ssh user set to ceph-admin. sudo will be used
Feb 02 15:06:47 compute-0 ceph-mon[75334]: [02/Feb/2026:15:06:46] ENGINE Bus STARTING
Feb 02 15:06:47 compute-0 ceph-mon[75334]: [02/Feb/2026:15:06:46] ENGINE Serving on http://192.168.122.100:8765
Feb 02 15:06:47 compute-0 ceph-mon[75334]: [02/Feb/2026:15:06:46] ENGINE Serving on https://192.168.122.100:7150
Feb 02 15:06:47 compute-0 ceph-mon[75334]: [02/Feb/2026:15:06:46] ENGINE Bus STARTED
Feb 02 15:06:47 compute-0 ceph-mon[75334]: [02/Feb/2026:15:06:46] ENGINE Client ('192.168.122.100', 58792) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:47 compute-0 ceph-mon[75334]: Set ssh ssh_identity_key
Feb 02 15:06:47 compute-0 ceph-mon[75334]: Set ssh private key
Feb 02 15:06:47 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb 02 15:06:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:47 compute-0 ceph-mgr[75628]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb 02 15:06:47 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb 02 15:06:47 compute-0 systemd[1]: libpod-f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28.scope: Deactivated successfully.
Feb 02 15:06:47 compute-0 podman[76551]: 2026-02-02 15:06:47.313672967 +0000 UTC m=+0.554894767 container died f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70880e640c4aa49a9dafc55b5c2da0728487e3f16fdb64d3d7c8ff8163bfe13-merged.mount: Deactivated successfully.
Feb 02 15:06:47 compute-0 podman[76551]: 2026-02-02 15:06:47.355512944 +0000 UTC m=+0.596734744 container remove f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28 (image=quay.io/ceph/ceph:v20, name=nifty_moser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:06:47 compute-0 systemd[1]: libpod-conmon-f5c5b3b44ed3224024b82e87680c8298f5fda898aa24ca6ffae38654f1367a28.scope: Deactivated successfully.
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.421241698 +0000 UTC m=+0.047160399 container create 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:47 compute-0 systemd[1]: Started libpod-conmon-4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507.scope.
Feb 02 15:06:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0372a50a6b0dda37e04144c169d4f48f16c8d16c7a5df30f7f7992a90c0f29bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0372a50a6b0dda37e04144c169d4f48f16c8d16c7a5df30f7f7992a90c0f29bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0372a50a6b0dda37e04144c169d4f48f16c8d16c7a5df30f7f7992a90c0f29bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.397740911 +0000 UTC m=+0.023659602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.511817692 +0000 UTC m=+0.137736453 container init 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.518378303 +0000 UTC m=+0.144296964 container start 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.522685389 +0000 UTC m=+0.148604090 container attach 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:06:47 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:47 compute-0 gallant_rosalind[76621]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6ST1+YqShTx450s/bakVWqCIKJu6gskJIxNPRbE8E1p29MjdYtB+nk+X665ddFVEE6az/N4jlO5rDxc1/A4iDzwB/wO9m4NMbXXaXDIT7czep6m0KZTARS/GcfHUH0KIObPpYTBm+iZptv8X/+9ba6+aMcIEuUFTo0VZpBaEaq1xBtQH0p8VDd+zqQSLPZij4YliTmzR+MZN8/o4WAemoReEMTjr2R35FJ89Go5L1vgOgdvCBCyS2rFrXE2GTnfKsofzABKXCMuGZQy0yrcb0JtgoZyYA9ZAwJi8B25SyMjbSEzOPWi+XaeKqbRzpRjkam9Hy1sTY6L/f54j/DMO0uxiewLASTwTgpq7Pkdc1EEyixGANkpk7xAmkSqoFBZJXhd3StFx4WH3eN5/6pqP8DUUVf2VXuawFj4jzik9HeTaQOqKC70N1g0bLtvcmClLueT+MDyyOCtJezfbQ4Xb8rWDSOBdpT2UL3GQzsQLdnAUiSkJoeN2Pa2CRRm2P4H8= zuul@controller
Feb 02 15:06:47 compute-0 systemd[1]: libpod-4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507.scope: Deactivated successfully.
Feb 02 15:06:47 compute-0 conmon[76621]: conmon 4609e8f17a19e1955b8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507.scope/container/memory.events
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.917464433 +0000 UTC m=+0.543383104 container died 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0372a50a6b0dda37e04144c169d4f48f16c8d16c7a5df30f7f7992a90c0f29bf-merged.mount: Deactivated successfully.
Feb 02 15:06:47 compute-0 podman[76605]: 2026-02-02 15:06:47.959909805 +0000 UTC m=+0.585828466 container remove 4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507 (image=quay.io/ceph/ceph:v20, name=gallant_rosalind, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:06:47 compute-0 systemd[1]: libpod-conmon-4609e8f17a19e1955b8ec00f5ab599dda8504e0e2144367d3fbcfc7c621b9507.scope: Deactivated successfully.
Feb 02 15:06:48 compute-0 podman[76660]: 2026-02-02 15:06:48.01714736 +0000 UTC m=+0.036293472 container create 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:48 compute-0 systemd[1]: Started libpod-conmon-11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139.scope.
Feb 02 15:06:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9a1e648f2f5e4b14922d9cd39e7eb411ceecfa5f5fb2c675b28733a013c0cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9a1e648f2f5e4b14922d9cd39e7eb411ceecfa5f5fb2c675b28733a013c0cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9a1e648f2f5e4b14922d9cd39e7eb411ceecfa5f5fb2c675b28733a013c0cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:48 compute-0 podman[76660]: 2026-02-02 15:06:48.079342067 +0000 UTC m=+0.098488219 container init 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:06:48 compute-0 podman[76660]: 2026-02-02 15:06:48.08639943 +0000 UTC m=+0.105545542 container start 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:06:48 compute-0 podman[76660]: 2026-02-02 15:06:48.090394879 +0000 UTC m=+0.109541021 container attach 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:06:48 compute-0 podman[76660]: 2026-02-02 15:06:48.000199674 +0000 UTC m=+0.019345786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:48 compute-0 ceph-mon[75334]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:48 compute-0 ceph-mon[75334]: Set ssh ssh_identity_pub
Feb 02 15:06:48 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:48 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:48 compute-0 sshd-session[76702]: Accepted publickey for ceph-admin from 192.168.122.100 port 44858 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:48 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:48 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 02 15:06:48 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 02 15:06:48 compute-0 systemd-logind[786]: New session 21 of user ceph-admin.
Feb 02 15:06:48 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 02 15:06:48 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 02 15:06:48 compute-0 systemd[76706]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:48 compute-0 systemd[76706]: Queued start job for default target Main User Target.
Feb 02 15:06:48 compute-0 sshd-session[76719]: Accepted publickey for ceph-admin from 192.168.122.100 port 44874 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:48 compute-0 systemd[76706]: Created slice User Application Slice.
Feb 02 15:06:48 compute-0 systemd[76706]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 02 15:06:48 compute-0 systemd[76706]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 15:06:48 compute-0 systemd[76706]: Reached target Paths.
Feb 02 15:06:48 compute-0 systemd[76706]: Reached target Timers.
Feb 02 15:06:48 compute-0 systemd-logind[786]: New session 23 of user ceph-admin.
Feb 02 15:06:48 compute-0 systemd[76706]: Starting D-Bus User Message Bus Socket...
Feb 02 15:06:48 compute-0 systemd[76706]: Starting Create User's Volatile Files and Directories...
Feb 02 15:06:48 compute-0 systemd[76706]: Finished Create User's Volatile Files and Directories.
Feb 02 15:06:48 compute-0 systemd[76706]: Listening on D-Bus User Message Bus Socket.
Feb 02 15:06:48 compute-0 systemd[76706]: Reached target Sockets.
Feb 02 15:06:48 compute-0 systemd[76706]: Reached target Basic System.
Feb 02 15:06:48 compute-0 systemd[76706]: Reached target Main User Target.
Feb 02 15:06:48 compute-0 systemd[76706]: Startup finished in 139ms.
Feb 02 15:06:48 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 02 15:06:48 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Feb 02 15:06:48 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Feb 02 15:06:48 compute-0 sshd-session[76702]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:48 compute-0 sshd-session[76719]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:49 compute-0 sudo[76726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:49 compute-0 sudo[76726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:49 compute-0 sudo[76726]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:49 compute-0 sshd-session[76751]: Accepted publickey for ceph-admin from 192.168.122.100 port 44888 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:49 compute-0 systemd-logind[786]: New session 24 of user ceph-admin.
Feb 02 15:06:49 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Feb 02 15:06:49 compute-0 ceph-mon[75334]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:49 compute-0 ceph-mon[75334]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:49 compute-0 sshd-session[76751]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:49 compute-0 sudo[76755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 02 15:06:49 compute-0 sudo[76755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:49 compute-0 sudo[76755]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:49 compute-0 sshd-session[76780]: Accepted publickey for ceph-admin from 192.168.122.100 port 44890 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:49 compute-0 systemd-logind[786]: New session 25 of user ceph-admin.
Feb 02 15:06:49 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Feb 02 15:06:49 compute-0 sshd-session[76780]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:49 compute-0 sudo[76784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Feb 02 15:06:49 compute-0 sudo[76784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:49 compute-0 sudo[76784]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:49 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb 02 15:06:49 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb 02 15:06:49 compute-0 sshd-session[76809]: Accepted publickey for ceph-admin from 192.168.122.100 port 44894 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:49 compute-0 systemd-logind[786]: New session 26 of user ceph-admin.
Feb 02 15:06:49 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Feb 02 15:06:49 compute-0 sshd-session[76809]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:50 compute-0 sudo[76813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:50 compute-0 sudo[76813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:50 compute-0 sudo[76813]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:50 compute-0 sshd-session[76838]: Accepted publickey for ceph-admin from 192.168.122.100 port 44906 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:50 compute-0 systemd-logind[786]: New session 27 of user ceph-admin.
Feb 02 15:06:50 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Feb 02 15:06:50 compute-0 ceph-mon[75334]: Deploying cephadm binary to compute-0
Feb 02 15:06:50 compute-0 sshd-session[76838]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:50 compute-0 sudo[76842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:50 compute-0 sudo[76842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:50 compute-0 sudo[76842]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052699 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:06:50 compute-0 sshd-session[76867]: Accepted publickey for ceph-admin from 192.168.122.100 port 44912 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:50 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:50 compute-0 systemd-logind[786]: New session 28 of user ceph-admin.
Feb 02 15:06:50 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Feb 02 15:06:50 compute-0 sshd-session[76867]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:50 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:50 compute-0 sudo[76871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Feb 02 15:06:50 compute-0 sudo[76871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:50 compute-0 sudo[76871]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:50 compute-0 sshd-session[76896]: Accepted publickey for ceph-admin from 192.168.122.100 port 44928 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:50 compute-0 systemd-logind[786]: New session 29 of user ceph-admin.
Feb 02 15:06:50 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Feb 02 15:06:50 compute-0 sshd-session[76896]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:51 compute-0 sudo[76900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:06:51 compute-0 sudo[76900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:51 compute-0 sudo[76900]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:51 compute-0 sshd-session[76925]: Accepted publickey for ceph-admin from 192.168.122.100 port 44936 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:51 compute-0 systemd-logind[786]: New session 30 of user ceph-admin.
Feb 02 15:06:51 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Feb 02 15:06:51 compute-0 sshd-session[76925]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:51 compute-0 sudo[76929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Feb 02 15:06:51 compute-0 sudo[76929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:51 compute-0 sudo[76929]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:51 compute-0 sshd-session[76954]: Accepted publickey for ceph-admin from 192.168.122.100 port 44942 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:51 compute-0 systemd-logind[786]: New session 31 of user ceph-admin.
Feb 02 15:06:51 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Feb 02 15:06:51 compute-0 sshd-session[76954]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:52 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:52 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:53 compute-0 sshd-session[76981]: Accepted publickey for ceph-admin from 192.168.122.100 port 44956 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:53 compute-0 systemd-logind[786]: New session 32 of user ceph-admin.
Feb 02 15:06:53 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Feb 02 15:06:53 compute-0 sshd-session[76981]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:53 compute-0 sudo[76985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Feb 02 15:06:53 compute-0 sudo[76985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:53 compute-0 sudo[76985]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:53 compute-0 sshd-session[77010]: Accepted publickey for ceph-admin from 192.168.122.100 port 44966 ssh2: RSA SHA256:8E2fPoccuph4Q/vfcIsHmompp+5/TcdUxVhv17icdPU
Feb 02 15:06:53 compute-0 systemd-logind[786]: New session 33 of user ceph-admin.
Feb 02 15:06:53 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Feb 02 15:06:53 compute-0 sshd-session[77010]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 02 15:06:53 compute-0 sudo[77014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 02 15:06:53 compute-0 sudo[77014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:53 compute-0 sudo[77014]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:06:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:53 compute-0 ceph-mgr[75628]: [cephadm INFO root] Added host compute-0
Feb 02 15:06:53 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 02 15:06:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:06:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:53 compute-0 inspiring_poitras[76676]: Added host 'compute-0' with addr '192.168.122.100'
Feb 02 15:06:53 compute-0 systemd[1]: libpod-11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139.scope: Deactivated successfully.
Feb 02 15:06:53 compute-0 podman[76660]: 2026-02-02 15:06:53.823409661 +0000 UTC m=+5.842555773 container died 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9a1e648f2f5e4b14922d9cd39e7eb411ceecfa5f5fb2c675b28733a013c0cf-merged.mount: Deactivated successfully.
Feb 02 15:06:53 compute-0 sudo[77059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:53 compute-0 podman[76660]: 2026-02-02 15:06:53.862583013 +0000 UTC m=+5.881729125 container remove 11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139 (image=quay.io/ceph/ceph:v20, name=inspiring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:06:53 compute-0 sudo[77059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:53 compute-0 sudo[77059]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:53 compute-0 systemd[1]: libpod-conmon-11dd16af072e4fde0cf5d0360a85e5738275b0a3ae0d89d43e201ce8fe311139.scope: Deactivated successfully.
Feb 02 15:06:53 compute-0 sudo[77100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Feb 02 15:06:53 compute-0 sudo[77100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:53 compute-0 podman[77098]: 2026-02-02 15:06:53.940131747 +0000 UTC m=+0.055145435 container create b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:53 compute-0 systemd[1]: Started libpod-conmon-b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe.scope.
Feb 02 15:06:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83160b9c1d179ad3d964f4fcc330451f38283521bda9b1ffbf4dfbb1461f5c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83160b9c1d179ad3d964f4fcc330451f38283521bda9b1ffbf4dfbb1461f5c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83160b9c1d179ad3d964f4fcc330451f38283521bda9b1ffbf4dfbb1461f5c34/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:53.917911891 +0000 UTC m=+0.032925679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:54.021205878 +0000 UTC m=+0.136219586 container init b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:54.027766929 +0000 UTC m=+0.142780617 container start b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:54.045315589 +0000 UTC m=+0.160329287 container attach b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:06:54 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:54 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb 02 15:06:54 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb 02 15:06:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 15:06:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:54 compute-0 nervous_gagarin[77139]: Scheduled mon update...
Feb 02 15:06:54 compute-0 systemd[1]: libpod-b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe.scope: Deactivated successfully.
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:54.47251496 +0000 UTC m=+0.587528728 container died b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-83160b9c1d179ad3d964f4fcc330451f38283521bda9b1ffbf4dfbb1461f5c34-merged.mount: Deactivated successfully.
Feb 02 15:06:54 compute-0 podman[77098]: 2026-02-02 15:06:54.52546687 +0000 UTC m=+0.640480558 container remove b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe (image=quay.io/ceph/ceph:v20, name=nervous_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:54 compute-0 systemd[1]: libpod-conmon-b3516b73d8ce4e236c2e751d581db20eb1bb8b00ef18343fb8008692453c2dfe.scope: Deactivated successfully.
Feb 02 15:06:54 compute-0 podman[77201]: 2026-02-02 15:06:54.57556458 +0000 UTC m=+0.035018041 container create 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:54 compute-0 systemd[1]: Started libpod-conmon-2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4.scope.
Feb 02 15:06:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2d5922c66387fb708e5a05c853569a3627cc41f02e3915c4800a002e75ade7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2d5922c66387fb708e5a05c853569a3627cc41f02e3915c4800a002e75ade7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2d5922c66387fb708e5a05c853569a3627cc41f02e3915c4800a002e75ade7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:54 compute-0 podman[77201]: 2026-02-02 15:06:54.637039499 +0000 UTC m=+0.096492950 container init 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:54 compute-0 podman[77201]: 2026-02-02 15:06:54.641491659 +0000 UTC m=+0.100945120 container start 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:54 compute-0 podman[77201]: 2026-02-02 15:06:54.645090847 +0000 UTC m=+0.104544308 container attach 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:54 compute-0 podman[77201]: 2026-02-02 15:06:54.559406574 +0000 UTC m=+0.018860115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:54 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:54 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:54 compute-0 podman[77172]: 2026-02-02 15:06:54.734035221 +0000 UTC m=+0.569155116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:54 compute-0 ceph-mon[75334]: Added host compute-0
Feb 02 15:06:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:06:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.832639933 +0000 UTC m=+0.040695651 container create 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:54 compute-0 systemd[1]: Started libpod-conmon-1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6.scope.
Feb 02 15:06:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.89529003 +0000 UTC m=+0.103345578 container init 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.900657962 +0000 UTC m=+0.108713460 container start 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.904382024 +0000 UTC m=+0.112437542 container attach 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.810080559 +0000 UTC m=+0.018136087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:54 compute-0 mystifying_wing[77266]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 02 15:06:54 compute-0 systemd[1]: libpod-1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6.scope: Deactivated successfully.
Feb 02 15:06:54 compute-0 podman[77252]: 2026-02-02 15:06:54.985989998 +0000 UTC m=+0.194045496 container died 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7491c0c2fb79ed607e9e7cb2fffc88c29a6a158a5f20a3e02471d7c4a60c0d54-merged.mount: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77252]: 2026-02-02 15:06:55.0198872 +0000 UTC m=+0.227942708 container remove 1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6 (image=quay.io/ceph/ceph:v20, name=mystifying_wing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:06:55 compute-0 systemd[1]: libpod-conmon-1a36506c1c24b3008a590d04a827d7afbe09ff1de3e28bff0009a7620c2c64a6.scope: Deactivated successfully.
Feb 02 15:06:55 compute-0 sudo[77100]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb 02 15:06:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:06:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 friendly_jackson[77217]: Scheduled mgr update...
Feb 02 15:06:55 compute-0 systemd[1]: libpod-2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4.scope: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77201]: 2026-02-02 15:06:55.110044194 +0000 UTC m=+0.569497645 container died 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b2d5922c66387fb708e5a05c853569a3627cc41f02e3915c4800a002e75ade7-merged.mount: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77201]: 2026-02-02 15:06:55.139918937 +0000 UTC m=+0.599372388 container remove 2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4 (image=quay.io/ceph/ceph:v20, name=friendly_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:06:55 compute-0 sudo[77288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:55 compute-0 sudo[77288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:55 compute-0 systemd[1]: libpod-conmon-2ad09dae9c6c0df21c369802d21d9989f064349a7150db18d2992b607fd329d4.scope: Deactivated successfully.
Feb 02 15:06:55 compute-0 sudo[77288]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:55 compute-0 sudo[77327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:06:55 compute-0 sudo[77327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.198296441 +0000 UTC m=+0.039053260 container create 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:06:55 compute-0 systemd[1]: Started libpod-conmon-294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6.scope.
Feb 02 15:06:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44baa1b8b812a8d1957599eed7a38d47d67cfe7a007448ed4ed8ef1221359f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44baa1b8b812a8d1957599eed7a38d47d67cfe7a007448ed4ed8ef1221359f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44baa1b8b812a8d1957599eed7a38d47d67cfe7a007448ed4ed8ef1221359f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.261051912 +0000 UTC m=+0.101808781 container init 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.265767018 +0000 UTC m=+0.106523857 container start 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.268859563 +0000 UTC m=+0.109616382 container attach 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.184328318 +0000 UTC m=+0.025085157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:55 compute-0 sudo[77327]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:06:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 sudo[77410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:55 compute-0 sudo[77410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:55 compute-0 sudo[77410]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:55 compute-0 sudo[77435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:06:55 compute-0 sudo[77435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service crash spec with placement *
Feb 02 15:06:55 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb 02 15:06:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 15:06:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 elated_spence[77366]: Scheduled crash update...
Feb 02 15:06:55 compute-0 systemd[1]: libpod-294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6.scope: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.695400687 +0000 UTC m=+0.536157536 container died 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44baa1b8b812a8d1957599eed7a38d47d67cfe7a007448ed4ed8ef1221359f0-merged.mount: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77324]: 2026-02-02 15:06:55.733210716 +0000 UTC m=+0.573967555 container remove 294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6 (image=quay.io/ceph/ceph:v20, name=elated_spence, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:06:55 compute-0 systemd[1]: libpod-conmon-294939f5c443945641e32b0bb3c0428cbea2a21a9c6c1bd2a358caa391aff6d6.scope: Deactivated successfully.
Feb 02 15:06:55 compute-0 podman[77476]: 2026-02-02 15:06:55.783196523 +0000 UTC m=+0.032462139 container create 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:06:55 compute-0 ceph-mon[75334]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:55 compute-0 ceph-mon[75334]: Saving service mon spec with placement count:5
Feb 02 15:06:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:55 compute-0 systemd[1]: Started libpod-conmon-49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da.scope.
Feb 02 15:06:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec50e21c0a29742a43b5b47c2d9ee232e1f864c3d6a6de97227144c6591ec0d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec50e21c0a29742a43b5b47c2d9ee232e1f864c3d6a6de97227144c6591ec0d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec50e21c0a29742a43b5b47c2d9ee232e1f864c3d6a6de97227144c6591ec0d8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:55 compute-0 podman[77476]: 2026-02-02 15:06:55.864693514 +0000 UTC m=+0.113959190 container init 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:06:55 compute-0 podman[77476]: 2026-02-02 15:06:55.768492322 +0000 UTC m=+0.017757938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:55 compute-0 podman[77476]: 2026-02-02 15:06:55.869744918 +0000 UTC m=+0.119010524 container start 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:55 compute-0 podman[77476]: 2026-02-02 15:06:55.87513163 +0000 UTC m=+0.124397286 container attach 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:55 compute-0 podman[77540]: 2026-02-02 15:06:55.997766742 +0000 UTC m=+0.048543673 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:56 compute-0 podman[77540]: 2026-02-02 15:06:56.074751572 +0000 UTC m=+0.125528473 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb 02 15:06:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2060246041' entity='client.admin' 
Feb 02 15:06:56 compute-0 systemd[1]: libpod-49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da.scope: Deactivated successfully.
Feb 02 15:06:56 compute-0 podman[77476]: 2026-02-02 15:06:56.358813557 +0000 UTC m=+0.608079153 container died 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec50e21c0a29742a43b5b47c2d9ee232e1f864c3d6a6de97227144c6591ec0d8-merged.mount: Deactivated successfully.
Feb 02 15:06:56 compute-0 podman[77476]: 2026-02-02 15:06:56.505196521 +0000 UTC m=+0.754462127 container remove 49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da (image=quay.io/ceph/ceph:v20, name=trusting_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:06:56 compute-0 systemd[1]: libpod-conmon-49c0bb0a3180d2e5e048a8e7c460a9eb7260147401b383fcc9b6d5ea555f48da.scope: Deactivated successfully.
Feb 02 15:06:56 compute-0 podman[77630]: 2026-02-02 15:06:56.569064949 +0000 UTC m=+0.041892669 container create 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:06:56 compute-0 systemd[1]: Started libpod-conmon-52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3.scope.
Feb 02 15:06:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d10cda26d1113c3cc0ae0038531888cb833824b9f00d641311a6e6c1402cb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d10cda26d1113c3cc0ae0038531888cb833824b9f00d641311a6e6c1402cb1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d10cda26d1113c3cc0ae0038531888cb833824b9f00d641311a6e6c1402cb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:56 compute-0 sudo[77435]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:56 compute-0 podman[77630]: 2026-02-02 15:06:56.645493426 +0000 UTC m=+0.118321166 container init 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:06:56 compute-0 podman[77630]: 2026-02-02 15:06:56.547693125 +0000 UTC m=+0.020520875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:06:56 compute-0 podman[77630]: 2026-02-02 15:06:56.650429387 +0000 UTC m=+0.123257097 container start 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:56 compute-0 podman[77630]: 2026-02-02 15:06:56.656368793 +0000 UTC m=+0.129196603 container attach 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:56 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:56 compute-0 sudo[77670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:56 compute-0 sudo[77670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:56 compute-0 sudo[77670]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:56 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:56 compute-0 sudo[77695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:06:56 compute-0 sudo[77695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:56 compute-0 ceph-mon[75334]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:56 compute-0 ceph-mon[75334]: Saving service mgr spec with placement count:2
Feb 02 15:06:56 compute-0 ceph-mon[75334]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:56 compute-0 ceph-mon[75334]: Saving service crash spec with placement *
Feb 02 15:06:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2060246041' entity='client.admin' 
Feb 02 15:06:56 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:56 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77751 (sysctl)
Feb 02 15:06:56 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 02 15:06:56 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 02 15:06:57 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb 02 15:06:57 compute-0 sudo[77695]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:57 compute-0 sudo[77774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:57 compute-0 sudo[77774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:57 compute-0 sudo[77774]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:57 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:57 compute-0 systemd[1]: libpod-52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3.scope: Deactivated successfully.
Feb 02 15:06:57 compute-0 podman[77630]: 2026-02-02 15:06:57.319693163 +0000 UTC m=+0.792520873 container died 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:57 compute-0 sudo[77799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 02 15:06:57 compute-0 sudo[77799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d10cda26d1113c3cc0ae0038531888cb833824b9f00d641311a6e6c1402cb1-merged.mount: Deactivated successfully.
Feb 02 15:06:57 compute-0 podman[77630]: 2026-02-02 15:06:57.413864002 +0000 UTC m=+0.886691762 container remove 52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3 (image=quay.io/ceph/ceph:v20, name=hungry_driscoll, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:57 compute-0 systemd[1]: libpod-conmon-52766688aa227cd8606eba82ca8778d62b2006c102e7084e756f8d16115149b3.scope: Deactivated successfully.
Feb 02 15:06:57 compute-0 podman[77835]: 2026-02-02 15:06:57.488824269 +0000 UTC m=+0.055636073 container create d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:57 compute-0 systemd[1]: Started libpod-conmon-d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86.scope.
Feb 02 15:06:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05daa87b31f8ce3e5bf38d54deb8923b3e6ed58a0e55868823291c15867dd74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05daa87b31f8ce3e5bf38d54deb8923b3e6ed58a0e55868823291c15867dd74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05daa87b31f8ce3e5bf38d54deb8923b3e6ed58a0e55868823291c15867dd74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:57 compute-0 podman[77835]: 2026-02-02 15:06:57.464419516 +0000 UTC m=+0.031231320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:57 compute-0 podman[77835]: 2026-02-02 15:06:57.569226771 +0000 UTC m=+0.136038545 container init d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:06:57 compute-0 podman[77835]: 2026-02-02 15:06:57.574669386 +0000 UTC m=+0.141481140 container start d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:57 compute-0 podman[77835]: 2026-02-02 15:06:57.577389329 +0000 UTC m=+0.144201123 container attach d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:06:57 compute-0 sudo[77799]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:06:57 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:57 compute-0 sudo[77875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:06:57 compute-0 sudo[77875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:57 compute-0 sudo[77875]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:57 compute-0 sudo[77919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- inventory --format=json-pretty --filter-for-batch
Feb 02 15:06:57 compute-0 sudo[77919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:57 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:06:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:58 compute-0 ceph-mgr[75628]: [cephadm INFO root] Added label _admin to host compute-0
Feb 02 15:06:58 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb 02 15:06:58 compute-0 nostalgic_murdock[77852]: Added label _admin to host compute-0
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.006137285 +0000 UTC m=+0.029839038 container create 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:58 compute-0 systemd[1]: libpod-d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77835]: 2026-02-02 15:06:58.021878009 +0000 UTC m=+0.588689773 container died d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:06:58 compute-0 systemd[1]: Started libpod-conmon-4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2.scope.
Feb 02 15:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b05daa87b31f8ce3e5bf38d54deb8923b3e6ed58a0e55868823291c15867dd74-merged.mount: Deactivated successfully.
Feb 02 15:06:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:58 compute-0 podman[77835]: 2026-02-02 15:06:58.079930505 +0000 UTC m=+0.646742269 container remove d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86 (image=quay.io/ceph/ceph:v20, name=nostalgic_murdock, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.084577072 +0000 UTC m=+0.108278845 container init 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.088446351 +0000 UTC m=+0.112148105 container start 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:57.99286063 +0000 UTC m=+0.016562393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:06:58 compute-0 jovial_ganguly[77982]: 167 167
Feb 02 15:06:58 compute-0 systemd[1]: libpod-4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.091863381 +0000 UTC m=+0.115565134 container attach 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.092045905 +0000 UTC m=+0.115747648 container died 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-307c89425ccdc10d9512dafbd220d873428322262a1e589669dd304d292b6ee8-merged.mount: Deactivated successfully.
Feb 02 15:06:58 compute-0 systemd[1]: libpod-conmon-d560cba4782bdf1afa2d162423bde67252b21e2029710389bc00157014deaf86.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77957]: 2026-02-02 15:06:58.129966648 +0000 UTC m=+0.153668411 container remove 4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:06:58 compute-0 systemd[1]: libpod-conmon-4bdcf5e9037fed605cf838ddeaaca16ffdd6a961451064700bf8074cbb521fd2.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.157269437 +0000 UTC m=+0.059487301 container create a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:58 compute-0 systemd[1]: Started libpod-conmon-a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862.scope.
Feb 02 15:06:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11042261c05526c46f247e7371f27ee40fbc5e97d9e152256597c368786f44dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11042261c05526c46f247e7371f27ee40fbc5e97d9e152256597c368786f44dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11042261c05526c46f247e7371f27ee40fbc5e97d9e152256597c368786f44dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.225482658 +0000 UTC m=+0.127700552 container init a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.230501914 +0000 UTC m=+0.132719778 container start a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.233253478 +0000 UTC m=+0.135471362 container attach a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.144484862 +0000 UTC m=+0.046702746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:58 compute-0 ceph-mon[75334]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:58 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:06:58 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:06:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb 02 15:06:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/83677927' entity='client.admin' 
Feb 02 15:06:58 compute-0 trusting_leakey[78019]: set mgr/dashboard/cluster/status
Feb 02 15:06:58 compute-0 systemd[1]: libpod-a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.765606971 +0000 UTC m=+0.667824875 container died a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-11042261c05526c46f247e7371f27ee40fbc5e97d9e152256597c368786f44dd-merged.mount: Deactivated successfully.
Feb 02 15:06:58 compute-0 podman[77991]: 2026-02-02 15:06:58.810839382 +0000 UTC m=+0.713057256 container remove a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862 (image=quay.io/ceph/ceph:v20, name=trusting_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:06:58 compute-0 systemd[1]: libpod-conmon-a497192444788582e4cbf62a48708ce7a32757697e2fe5baee14414370e81862.scope: Deactivated successfully.
Feb 02 15:06:58 compute-0 systemd[1]: Reloading.
Feb 02 15:06:58 compute-0 systemd-sysv-generator[78089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:06:58 compute-0 systemd-rc-local-generator[78086]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:06:59 compute-0 sudo[74300]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.213266562 +0000 UTC m=+0.056220746 container create 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:06:59 compute-0 systemd[1]: Started libpod-conmon-64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27.scope.
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.183144239 +0000 UTC m=+0.026098493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:06:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3421c1371cf73948e88e295706211b529980cade72c693063904fbb5d0ce7abf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3421c1371cf73948e88e295706211b529980cade72c693063904fbb5d0ce7abf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3421c1371cf73948e88e295706211b529980cade72c693063904fbb5d0ce7abf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3421c1371cf73948e88e295706211b529980cade72c693063904fbb5d0ce7abf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.314091874 +0000 UTC m=+0.157046098 container init 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.324137057 +0000 UTC m=+0.167091241 container start 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.328308313 +0000 UTC m=+0.171262507 container attach 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:06:59 compute-0 sudo[78149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjiioxygkgzhxkzlmvqdbjtvvavsgcck ; /usr/bin/python3'
Feb 02 15:06:59 compute-0 sudo[78149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:06:59 compute-0 python3[78151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:06:59 compute-0 podman[78157]: 2026-02-02 15:06:59.649078672 +0000 UTC m=+0.065240774 container create a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:06:59 compute-0 systemd[1]: Started libpod-conmon-a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486.scope.
Feb 02 15:06:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ef0ecbd727cdde66d744de220658413270cea875d0d7f02894faa9f40fc82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39ef0ecbd727cdde66d744de220658413270cea875d0d7f02894faa9f40fc82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:06:59 compute-0 podman[78157]: 2026-02-02 15:06:59.618486987 +0000 UTC m=+0.034649089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:06:59 compute-0 podman[78157]: 2026-02-02 15:06:59.731051889 +0000 UTC m=+0.147214031 container init a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:59 compute-0 podman[78157]: 2026-02-02 15:06:59.737004647 +0000 UTC m=+0.153166759 container start a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:59 compute-0 podman[78157]: 2026-02-02 15:06:59.748697736 +0000 UTC m=+0.164859848 container attach a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:06:59 compute-0 ceph-mon[75334]: Added label _admin to host compute-0
Feb 02 15:06:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/83677927' entity='client.admin' 
Feb 02 15:06:59 compute-0 busy_montalcini[78121]: [
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:     {
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "available": false,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "being_replaced": false,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "ceph_device_lvm": false,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "lsm_data": {},
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "lvs": [],
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "path": "/dev/sr0",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "rejected_reasons": [
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "Has a FileSystem",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "Insufficient space (<5GB)"
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         ],
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         "sys_api": {
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "actuators": null,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "device_nodes": [
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:                 "sr0"
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             ],
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "devname": "sr0",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "human_readable_size": "482.00 KB",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "id_bus": "ata",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "model": "QEMU DVD-ROM",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "nr_requests": "2",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "parent": "/dev/sr0",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "partitions": {},
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "path": "/dev/sr0",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "removable": "1",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "rev": "2.5+",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "ro": "0",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "rotational": "1",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "sas_address": "",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "sas_device_handle": "",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "scheduler_mode": "mq-deadline",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "sectors": 0,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "sectorsize": "2048",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "size": 493568.0,
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "support_discard": "2048",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "type": "disk",
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:             "vendor": "QEMU"
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:         }
Feb 02 15:06:59 compute-0 busy_montalcini[78121]:     }
Feb 02 15:06:59 compute-0 busy_montalcini[78121]: ]
Feb 02 15:06:59 compute-0 systemd[1]: libpod-64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27.scope: Deactivated successfully.
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.828451884 +0000 UTC m=+0.671406048 container died 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3421c1371cf73948e88e295706211b529980cade72c693063904fbb5d0ce7abf-merged.mount: Deactivated successfully.
Feb 02 15:06:59 compute-0 podman[78105]: 2026-02-02 15:06:59.867036402 +0000 UTC m=+0.709990576 container remove 64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:06:59 compute-0 systemd[1]: libpod-conmon-64cdf707952ba865b0b506d56bdb148f683b5e8ef007b5cc77049ebad59cce27.scope: Deactivated successfully.
Feb 02 15:06:59 compute-0 sudo[77919]: pam_unix(sudo:session): session closed for user root
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:06:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:06:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:06:59 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 02 15:06:59 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 02 15:06:59 compute-0 sudo[78873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 15:06:59 compute-0 sudo[78873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:06:59 compute-0 sudo[78873]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[78898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph
Feb 02 15:07:00 compute-0 sudo[78898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[78898]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[78923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[78923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[78923]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb 02 15:07:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/252000921' entity='client.admin' 
Feb 02 15:07:00 compute-0 systemd[1]: libpod-a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486.scope: Deactivated successfully.
Feb 02 15:07:00 compute-0 podman[78157]: 2026-02-02 15:07:00.158028315 +0000 UTC m=+0.574190407 container died a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:00 compute-0 sudo[78948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:00 compute-0 sudo[78948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[78948]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39ef0ecbd727cdde66d744de220658413270cea875d0d7f02894faa9f40fc82-merged.mount: Deactivated successfully.
Feb 02 15:07:00 compute-0 podman[78157]: 2026-02-02 15:07:00.194969767 +0000 UTC m=+0.611131859 container remove a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486 (image=quay.io/ceph/ceph:v20, name=sweet_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:07:00 compute-0 systemd[1]: libpod-conmon-a5a09cc2eddd858d4bccc01107906b33038f2c504e77b113ee8efa6882cd7486.scope: Deactivated successfully.
Feb 02 15:07:00 compute-0 sudo[78149]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[78987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[78987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[78987]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79035]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79060]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 02 15:07:00 compute-0 sudo[79085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79085]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf
Feb 02 15:07:00 compute-0 sudo[79110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config
Feb 02 15:07:00 compute-0 sudo[79110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79110]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config
Feb 02 15:07:00 compute-0 sudo[79135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79135]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79183]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:00 compute-0 sudo[79237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:00 compute-0 sudo[79237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79237]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:00 compute-0 sudo[79285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79285]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 02 15:07:00 compute-0 sudo[79333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79333]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf.new
Feb 02 15:07:00 compute-0 sudo[79358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79358]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 sudo[79406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf.new /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf
Feb 02 15:07:00 compute-0 sudo[79406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:00 compute-0 ceph-mon[75334]: Updating compute-0:/etc/ceph/ceph.conf
Feb 02 15:07:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/252000921' entity='client.admin' 
Feb 02 15:07:00 compute-0 ceph-mon[75334]: Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.conf
Feb 02 15:07:00 compute-0 sudo[79406]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 15:07:00 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 15:07:00 compute-0 sudo[79503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxilcygsijslqwplxjrgiwlzvezdutq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770044820.5246308-36553-46119570856067/async_wrapper.py j921088867536 30 /home/zuul/.ansible/tmp/ansible-tmp-1770044820.5246308-36553-46119570856067/AnsiballZ_command.py _'
Feb 02 15:07:00 compute-0 sudo[79503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:00 compute-0 sudo[79455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 02 15:07:00 compute-0 sudo[79455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:00 compute-0 sudo[79455]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph
Feb 02 15:07:01 compute-0 sudo[79508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79508]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79533]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 ansible-async_wrapper.py[79505]: Invoked with j921088867536 30 /home/zuul/.ansible/tmp/ansible-tmp-1770044820.5246308-36553-46119570856067/AnsiballZ_command.py _
Feb 02 15:07:01 compute-0 ansible-async_wrapper.py[79579]: Starting module and watcher
Feb 02 15:07:01 compute-0 ansible-async_wrapper.py[79579]: Start watching 79582 (30)
Feb 02 15:07:01 compute-0 ansible-async_wrapper.py[79582]: Start module (79582)
Feb 02 15:07:01 compute-0 ansible-async_wrapper.py[79505]: Return async_wrapper task started.
Feb 02 15:07:01 compute-0 sudo[79503]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:01 compute-0 sudo[79558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79558]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79588]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79636]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 python3[79585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:01 compute-0 sudo[79661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79661]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 sudo[79692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79692]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 podman[79685]: 2026-02-02 15:07:01.39719213 +0000 UTC m=+0.069738658 container create 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:01 compute-0 sudo[79724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config
Feb 02 15:07:01 compute-0 sudo[79724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79724]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 podman[79685]: 2026-02-02 15:07:01.348326304 +0000 UTC m=+0.020872872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:01 compute-0 systemd[1]: Started libpod-conmon-24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576.scope.
Feb 02 15:07:01 compute-0 sudo[79749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config
Feb 02 15:07:01 compute-0 sudo[79749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79749]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5a988cae2626aefad110248acc1e04b2d065d58e43ff3bf47672e7d415d9ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5a988cae2626aefad110248acc1e04b2d065d58e43ff3bf47672e7d415d9ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:01 compute-0 sudo[79779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79779]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 podman[79685]: 2026-02-02 15:07:01.575789964 +0000 UTC m=+0.248336512 container init 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:01 compute-0 podman[79685]: 2026-02-02 15:07:01.580315598 +0000 UTC m=+0.252862136 container start 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 02 15:07:01 compute-0 sudo[79804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:01 compute-0 sudo[79804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79804]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79830]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 podman[79685]: 2026-02-02 15:07:01.671211862 +0000 UTC m=+0.343758390 container attach 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:07:01 compute-0 sudo[79895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79895]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring.new
Feb 02 15:07:01 compute-0 sudo[79922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79922]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 sudo[79947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-e43470b2-6632-573a-87d3-0f5428ec59e9/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring.new /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 sudo[79947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:01 compute-0 sudo[79947]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 52dc7dfb-dc8a-4c87-ac4b-ae31a647849e (Updating crash deployment (+1 -> 1))
Feb 02 15:07:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 02 15:07:01 compute-0 ceph-mon[75334]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 ceph-mon[75334]: Updating compute-0:/var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/config/ceph.client.admin.keyring
Feb 02 15:07:01 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 15:07:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb 02 15:07:01 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:07:01 compute-0 gifted_jones[79775]: 
Feb 02 15:07:01 compute-0 gifted_jones[79775]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 15:07:02 compute-0 sudo[79972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:02 compute-0 sudo[79972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:02 compute-0 sudo[79972]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:02 compute-0 systemd[1]: libpod-24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576.scope: Deactivated successfully.
Feb 02 15:07:02 compute-0 podman[79685]: 2026-02-02 15:07:02.012080304 +0000 UTC m=+0.684626842 container died 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:02 compute-0 sudo[80000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:02 compute-0 sudo[80000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d5a988cae2626aefad110248acc1e04b2d065d58e43ff3bf47672e7d415d9ba-merged.mount: Deactivated successfully.
Feb 02 15:07:02 compute-0 podman[79685]: 2026-02-02 15:07:02.194701861 +0000 UTC m=+0.867248429 container remove 24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576 (image=quay.io/ceph/ceph:v20, name=gifted_jones, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:02 compute-0 systemd[1]: libpod-conmon-24d6134083a47620edfd8d04e251979eb619d514e9f723eb2a03cd63dd5fd576.scope: Deactivated successfully.
Feb 02 15:07:02 compute-0 ansible-async_wrapper.py[79582]: Module complete (79582)
Feb 02 15:07:02 compute-0 sudo[80120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxlyogiowurhqobgwcbvfigtmgruqngj ; /usr/bin/python3'
Feb 02 15:07:02 compute-0 sudo[80120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.419195582 +0000 UTC m=+0.024133176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:02 compute-0 python3[80127]: ansible-ansible.legacy.async_status Invoked with jid=j921088867536.79505 mode=status _async_dir=/root/.ansible_async
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.538178613 +0000 UTC m=+0.143116117 container create d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:07:02 compute-0 sudo[80120]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:02 compute-0 systemd[1]: Started libpod-conmon-d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4.scope.
Feb 02 15:07:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:02 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:02 compute-0 sudo[80194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncjhqefnxjluarqwejhcklayalythfv ; /usr/bin/python3'
Feb 02 15:07:02 compute-0 sudo[80194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.701353852 +0000 UTC m=+0.306291396 container init d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:02 compute-0 ceph-mgr[75628]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb 02 15:07:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.708878405 +0000 UTC m=+0.313815909 container start d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:07:02 compute-0 modest_rhodes[80163]: 167 167
Feb 02 15:07:02 compute-0 systemd[1]: libpod-d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4.scope: Deactivated successfully.
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.771434887 +0000 UTC m=+0.376372421 container attach d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:02 compute-0 podman[80128]: 2026-02-02 15:07:02.772929331 +0000 UTC m=+0.377866885 container died d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:07:02 compute-0 python3[80196]: ansible-ansible.legacy.async_status Invoked with jid=j921088867536.79505 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 15:07:02 compute-0 sudo[80194]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93aeeecf288aebb73d8a56ff4e42e151c5f4df1f153790299963218b4b01f81e-merged.mount: Deactivated successfully.
Feb 02 15:07:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 02 15:07:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 02 15:07:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:02 compute-0 ceph-mon[75334]: Deploying daemon crash.compute-0 on compute-0
Feb 02 15:07:02 compute-0 ceph-mon[75334]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:07:02 compute-0 ceph-mon[75334]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:02 compute-0 ceph-mon[75334]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 02 15:07:03 compute-0 podman[80128]: 2026-02-02 15:07:03.002349616 +0000 UTC m=+0.607287160 container remove d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:03 compute-0 systemd[1]: libpod-conmon-d000b89d414046034969a69966296d3372fe876620763d048ec8bc2fd2c0b1d4.scope: Deactivated successfully.
Feb 02 15:07:03 compute-0 sudo[80236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achnhcllmuraofhmzxnbfkirfqxjxjzc ; /usr/bin/python3'
Feb 02 15:07:03 compute-0 sudo[80236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:03 compute-0 systemd[1]: Reloading.
Feb 02 15:07:03 compute-0 python3[80238]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 02 15:07:03 compute-0 systemd-rc-local-generator[80265]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:03 compute-0 systemd-sysv-generator[80269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:03 compute-0 sudo[80236]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:03 compute-0 systemd[1]: Reloading.
Feb 02 15:07:03 compute-0 systemd-rc-local-generator[80305]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:03 compute-0 systemd-sysv-generator[80308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:03 compute-0 systemd[1]: Starting Ceph crash.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:03 compute-0 sudo[80350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkkbjvzgnpnnkppuyswvaggozygwmjrf ; /usr/bin/python3'
Feb 02 15:07:03 compute-0 sudo[80350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:03 compute-0 podman[80392]: 2026-02-02 15:07:03.961663494 +0000 UTC m=+0.041837375 container create 74836a9dee83076978c86731057bc2a0e4712182885ed8af43de4c5c14d2450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:03 compute-0 python3[80357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffdf5356efe1d44e1cd5dc436683c89212ee54849eb4e6391a537092d1020e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffdf5356efe1d44e1cd5dc436683c89212ee54849eb4e6391a537092d1020e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffdf5356efe1d44e1cd5dc436683c89212ee54849eb4e6391a537092d1020e2/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffdf5356efe1d44e1cd5dc436683c89212ee54849eb4e6391a537092d1020e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 podman[80392]: 2026-02-02 15:07:04.026082158 +0000 UTC m=+0.106256019 container init 74836a9dee83076978c86731057bc2a0e4712182885ed8af43de4c5c14d2450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:04 compute-0 podman[80392]: 2026-02-02 15:07:04.033190982 +0000 UTC m=+0.113364823 container start 74836a9dee83076978c86731057bc2a0e4712182885ed8af43de4c5c14d2450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:04 compute-0 podman[80392]: 2026-02-02 15:07:03.938226364 +0000 UTC m=+0.018400255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:04 compute-0 bash[80392]: 74836a9dee83076978c86731057bc2a0e4712182885ed8af43de4c5c14d2450d
Feb 02 15:07:04 compute-0 systemd[1]: Started Ceph crash.compute-0 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.042983048 +0000 UTC m=+0.058574331 container create 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: INFO:ceph-crash:pinging cluster to exercise our key
Feb 02 15:07:04 compute-0 sudo[80000]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:04 compute-0 systemd[1]: Started libpod-conmon-0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd.scope.
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 52dc7dfb-dc8a-4c87-ac4b-ae31a647849e (Updating crash deployment (+1 -> 1))
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 52dc7dfb-dc8a-4c87-ac4b-ae31a647849e (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.02179409 +0000 UTC m=+0.037385373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev aa2fa8d9-7a92-4a09-80a5-c6c39ccd608f (Updating mgr deployment (+1 -> 2))
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmnvvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmnvvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmnvvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 15:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfb383fb02f79880afa05656f0d1b58c5c252590c72401b5cbb9a49719545e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfb383fb02f79880afa05656f0d1b58c5c252590c72401b5cbb9a49719545e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbfb383fb02f79880afa05656f0d1b58c5c252590c72401b5cbb9a49719545e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.150103005 +0000 UTC m=+0.165694278 container init 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr services"} : dispatch
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.pmnvvl on compute-0
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.pmnvvl on compute-0
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.159222685 +0000 UTC m=+0.174813928 container start 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.163182527 +0000 UTC m=+0.178773770 container attach 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.161+0000 7fd9bdfc6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.161+0000 7fd9bdfc6640 -1 AuthRegistry(0x7fd9b8052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.163+0000 7fd9bdfc6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.163+0000 7fd9bdfc6640 -1 AuthRegistry(0x7fd9bdfc4fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.164+0000 7fd9b77fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: 2026-02-02T15:07:04.164+0000 7fd9bdfc6640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb 02 15:07:04 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-crash-compute-0[80414]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb 02 15:07:04 compute-0 sudo[80435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:04 compute-0 sudo[80435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:04 compute-0 sudo[80435]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:04 compute-0 sudo[80468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:04 compute-0 sudo[80468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:07:04 compute-0 musing_bardeen[80429]: 
Feb 02 15:07:04 compute-0 musing_bardeen[80429]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 15:07:04 compute-0 systemd[1]: libpod-0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd.scope: Deactivated successfully.
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.586277713 +0000 UTC m=+0.601868976 container died 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbfb383fb02f79880afa05656f0d1b58c5c252590c72401b5cbb9a49719545e4-merged.mount: Deactivated successfully.
Feb 02 15:07:04 compute-0 podman[80405]: 2026-02-02 15:07:04.629386976 +0000 UTC m=+0.644978229 container remove 0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd (image=quay.io/ceph/ceph:v20, name=musing_bardeen, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:04 compute-0 systemd[1]: libpod-conmon-0945d1480fa0dae4ba78d303a1285c437ba30b65c30111ebd96c7e502e9b5fbd.scope: Deactivated successfully.
Feb 02 15:07:04 compute-0 sudo[80350]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 1 completed events
Feb 02 15:07:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.671722891 +0000 UTC m=+0.039850489 container create c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:04 compute-0 systemd[1]: Started libpod-conmon-c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9.scope.
Feb 02 15:07:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.733899853 +0000 UTC m=+0.102027511 container init c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.74246753 +0000 UTC m=+0.110595148 container start c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:07:04 compute-0 beautiful_goldwasser[80580]: 167 167
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.746344999 +0000 UTC m=+0.114472637 container attach c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:04 compute-0 systemd[1]: libpod-c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9.scope: Deactivated successfully.
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.748061819 +0000 UTC m=+0.116189467 container died c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.651993666 +0000 UTC m=+0.020121294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-52db36da36fc9f14242054a1120c5f3f058fd560f295d91d641b77a247673540-merged.mount: Deactivated successfully.
Feb 02 15:07:04 compute-0 podman[80563]: 2026-02-02 15:07:04.783734581 +0000 UTC m=+0.151862189 container remove c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:04 compute-0 systemd[1]: libpod-conmon-c119775e0c05a5a364f163577e37c16a3892fbd24fc7631930b1991a0710ded9.scope: Deactivated successfully.
Feb 02 15:07:04 compute-0 systemd[1]: Reloading.
Feb 02 15:07:04 compute-0 systemd-rc-local-generator[80623]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:04 compute-0 systemd-sysv-generator[80630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:05 compute-0 sudo[80654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltowdrfyfhmmxtqzpdfumgkybxzvrczg ; /usr/bin/python3'
Feb 02 15:07:05 compute-0 sudo[80654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmnvvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pmnvvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr services"} : dispatch
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:05 compute-0 ceph-mon[75334]: Deploying daemon mgr.compute-0.pmnvvl on compute-0
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:07:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:05 compute-0 systemd[1]: Reloading.
Feb 02 15:07:05 compute-0 systemd-sysv-generator[80686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:05 compute-0 systemd-rc-local-generator[80682]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:05 compute-0 python3[80659]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.304662131 +0000 UTC m=+0.061260753 container create 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:07:05 compute-0 systemd[1]: Starting Ceph mgr.compute-0.pmnvvl for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:05 compute-0 systemd[1]: Started libpod-conmon-2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47.scope.
Feb 02 15:07:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fd09712aff23f3ba8cee14afcd79c1782c251ccdf70e453832eef9b687041/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fd09712aff23f3ba8cee14afcd79c1782c251ccdf70e453832eef9b687041/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9fd09712aff23f3ba8cee14afcd79c1782c251ccdf70e453832eef9b687041/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.288695943 +0000 UTC m=+0.045294585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.399407204 +0000 UTC m=+0.156005916 container init 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.409595728 +0000 UTC m=+0.166194380 container start 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.413693543 +0000 UTC m=+0.170292195 container attach 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:05 compute-0 podman[80765]: 2026-02-02 15:07:05.566523993 +0000 UTC m=+0.050688189 container create 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:05 compute-0 podman[80765]: 2026-02-02 15:07:05.540847702 +0000 UTC m=+0.025011928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f32d44015e98c246205cd703d074891c56c2c6b9c77f9409c552386f2ed23a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f32d44015e98c246205cd703d074891c56c2c6b9c77f9409c552386f2ed23a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f32d44015e98c246205cd703d074891c56c2c6b9c77f9409c552386f2ed23a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f32d44015e98c246205cd703d074891c56c2c6b9c77f9409c552386f2ed23a1/merged/var/lib/ceph/mgr/ceph-compute-0.pmnvvl supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:05 compute-0 podman[80765]: 2026-02-02 15:07:05.732103307 +0000 UTC m=+0.216267583 container init 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:05 compute-0 podman[80765]: 2026-02-02 15:07:05.736301164 +0000 UTC m=+0.220465400 container start 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:05 compute-0 bash[80765]: 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d
Feb 02 15:07:05 compute-0 systemd[1]: Started Ceph mgr.compute-0.pmnvvl for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: pidfile_write: ignore empty --pid-file
Feb 02 15:07:05 compute-0 sudo[80468]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'alerts'
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev aa2fa8d9-7a92-4a09-80a5-c6c39ccd608f (Updating mgr deployment (+1 -> 2))
Feb 02 15:07:05 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event aa2fa8d9-7a92-4a09-80a5-c6c39ccd608f (Updating mgr deployment (+1 -> 2)) in 2 seconds
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb 02 15:07:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/197151525' entity='client.admin' 
Feb 02 15:07:05 compute-0 systemd[1]: libpod-2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47.scope: Deactivated successfully.
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.862947501 +0000 UTC m=+0.619546113 container died 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:05 compute-0 sudo[80824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:05 compute-0 sudo[80824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:05 compute-0 sudo[80824]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba9fd09712aff23f3ba8cee14afcd79c1782c251ccdf70e453832eef9b687041-merged.mount: Deactivated successfully.
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'balancer'
Feb 02 15:07:05 compute-0 podman[80697]: 2026-02-02 15:07:05.906055985 +0000 UTC m=+0.662654597 container remove 2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47 (image=quay.io/ceph/ceph:v20, name=bold_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:05 compute-0 sudo[80654]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:05 compute-0 systemd[1]: libpod-conmon-2f6161d8c7cba36e90b879e693c9805ef7d07d9afbd25de5827d8688052afe47.scope: Deactivated successfully.
Feb 02 15:07:05 compute-0 sudo[80861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:05 compute-0 sudo[80861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:05 compute-0 sudo[80861]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:05 compute-0 sudo[80886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:07:05 compute-0 sudo[80886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:05 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'cephadm'
Feb 02 15:07:06 compute-0 sudo[80934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbzufaweiikdckgbbptumnzonnafcst ; /usr/bin/python3'
Feb 02 15:07:06 compute-0 sudo[80934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:06 compute-0 ansible-async_wrapper.py[79579]: Done in kid B.
Feb 02 15:07:06 compute-0 python3[80936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.189317949 +0000 UTC m=+0.045305555 container create 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:07:06 compute-0 systemd[1]: Started libpod-conmon-708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c.scope.
Feb 02 15:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821519eb216543c2cb5f30ea8995a7a90e73ced090fedf61c870bedb9c07f477/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821519eb216543c2cb5f30ea8995a7a90e73ced090fedf61c870bedb9c07f477/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821519eb216543c2cb5f30ea8995a7a90e73ced090fedf61c870bedb9c07f477/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.165591543 +0000 UTC m=+0.021579219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.266890676 +0000 UTC m=+0.122878292 container init 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.274186394 +0000 UTC m=+0.130173990 container start 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.28400929 +0000 UTC m=+0.139996896 container attach 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:07:06 compute-0 podman[81002]: 2026-02-02 15:07:06.362273803 +0000 UTC m=+0.049590283 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:06 compute-0 podman[81002]: 2026-02-02 15:07:06.454187401 +0000 UTC m=+0.141503861 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/353968766' entity='client.admin' 
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:06 compute-0 systemd[1]: libpod-708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c.scope: Deactivated successfully.
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.702291786 +0000 UTC m=+0.558279382 container died 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-821519eb216543c2cb5f30ea8995a7a90e73ced090fedf61c870bedb9c07f477-merged.mount: Deactivated successfully.
Feb 02 15:07:06 compute-0 podman[80938]: 2026-02-02 15:07:06.742723857 +0000 UTC m=+0.598711453 container remove 708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c (image=quay.io/ceph/ceph:v20, name=angry_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:06 compute-0 sudo[80934]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:06 compute-0 systemd[1]: libpod-conmon-708e50f8ba9872e39688891d3e733273d83b9975e612c4b3afc6abb5337e739c.scope: Deactivated successfully.
Feb 02 15:07:06 compute-0 sudo[80886]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'crash'
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/197151525' entity='client.admin' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/353968766' entity='client.admin' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:06 compute-0 sudo[81157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:06 compute-0 sudo[81157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:06 compute-0 sudo[81157]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 02 15:07:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 15:07:06 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 15:07:06 compute-0 sudo[81182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:06 compute-0 sudo[81182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:06 compute-0 sudo[81182]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:06 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'dashboard'
Feb 02 15:07:06 compute-0 sudo[81207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:06 compute-0 sudo[81207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:06 compute-0 sudo[81254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbafspbnfovorfdnkipoghqcmjnbcxbx ; /usr/bin/python3'
Feb 02 15:07:06 compute-0 sudo[81254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:07 compute-0 python3[81257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:07 compute-0 podman[81258]: 2026-02-02 15:07:07.113675352 +0000 UTC m=+0.028382114 container create aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:07:07 compute-0 systemd[1]: Started libpod-conmon-aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30.scope.
Feb 02 15:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ad32a0187568cd6773e46231846795ad3513d37cfece6825dc7e029efd15ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ad32a0187568cd6773e46231846795ad3513d37cfece6825dc7e029efd15ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ad32a0187568cd6773e46231846795ad3513d37cfece6825dc7e029efd15ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:07 compute-0 podman[81258]: 2026-02-02 15:07:07.177723388 +0000 UTC m=+0.092430190 container init aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:07 compute-0 podman[81258]: 2026-02-02 15:07:07.182301933 +0000 UTC m=+0.097008705 container start aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:07 compute-0 podman[81258]: 2026-02-02 15:07:07.185810265 +0000 UTC m=+0.100517067 container attach aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:07:07 compute-0 podman[81258]: 2026-02-02 15:07:07.101032291 +0000 UTC m=+0.015739093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.215326474 +0000 UTC m=+0.038997659 container create c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:07:07 compute-0 systemd[1]: Started libpod-conmon-c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e.scope.
Feb 02 15:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.266772709 +0000 UTC m=+0.090443914 container init c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.271044387 +0000 UTC m=+0.094715572 container start c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:07 compute-0 intelligent_saha[81311]: 167 167
Feb 02 15:07:07 compute-0 systemd[1]: libpod-c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e.scope: Deactivated successfully.
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.274874676 +0000 UTC m=+0.098545861 container attach c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.275032599 +0000 UTC m=+0.098703784 container died c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f917efe925a0b966ecf266271cdd22c07e25907a41cdfb6fb9cb2afa633f0820-merged.mount: Deactivated successfully.
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.199321635 +0000 UTC m=+0.022992850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:07 compute-0 podman[81294]: 2026-02-02 15:07:07.306602507 +0000 UTC m=+0.130273712 container remove c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e (image=quay.io/ceph/ceph:v20, name=intelligent_saha, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:07:07 compute-0 systemd[1]: libpod-conmon-c5caa2df987262519eb0a6563794d742060240f28ef1dbeaaefeb7479a0c0e4e.scope: Deactivated successfully.
Feb 02 15:07:07 compute-0 sudo[81207]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rxryxi (unknown last config time)...
Feb 02 15:07:07 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rxryxi (unknown last config time)...
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rxryxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rxryxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr services"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rxryxi on compute-0
Feb 02 15:07:07 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rxryxi on compute-0
Feb 02 15:07:07 compute-0 sudo[81345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:07 compute-0 sudo[81345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:07 compute-0 sudo[81345]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:07 compute-0 sudo[81370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:07 compute-0 sudo[81370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1400680670' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'devicehealth'
Feb 02 15:07:07 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'diskprediction_local'
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.743836318 +0000 UTC m=+0.045220782 container create 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:07:07 compute-0 systemd[1]: Started libpod-conmon-9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c.scope.
Feb 02 15:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.796633725 +0000 UTC m=+0.098018209 container init 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.801616929 +0000 UTC m=+0.103001413 container start 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:07 compute-0 zen_goldstine[81428]: 167 167
Feb 02 15:07:07 compute-0 systemd[1]: libpod-9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c.scope: Deactivated successfully.
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.805696183 +0000 UTC m=+0.107080677 container attach 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.805998891 +0000 UTC m=+0.107383345 container died 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rxryxi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mgr services"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1400680670' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.72783482 +0000 UTC m=+0.029219324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0458bfad7ec8a063c43404ce39b8b0015534b1bbc1fbf87de5d0857122b9545-merged.mount: Deactivated successfully.
Feb 02 15:07:07 compute-0 podman[81412]: 2026-02-02 15:07:07.841648242 +0000 UTC m=+0.143032686 container remove 9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c (image=quay.io/ceph/ceph:v20, name=zen_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:07:07 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl[80799]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 02 15:07:07 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl[80799]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 02 15:07:07 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl[80799]:   from numpy import show_config as show_numpy_config
Feb 02 15:07:07 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'influx'
Feb 02 15:07:07 compute-0 systemd[1]: libpod-conmon-9b64e47551d827a631a2dd6a0c023bb4225ccbf1e3b09e50d9d61a923ef2c86c.scope: Deactivated successfully.
Feb 02 15:07:07 compute-0 sudo[81370]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:07 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'insights'
Feb 02 15:07:07 compute-0 sudo[81446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:07 compute-0 sudo[81446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:07 compute-0 sudo[81446]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:07 compute-0 sudo[81471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:07:07 compute-0 sudo[81471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:07 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'iostat'
Feb 02 15:07:08 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'k8sevents'
Feb 02 15:07:08 compute-0 podman[81539]: 2026-02-02 15:07:08.338090318 +0000 UTC m=+0.052995122 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1400680670' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb 02 15:07:08 compute-0 inspiring_sinoussi[81289]: set require_min_compat_client to mimic
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb 02 15:07:08 compute-0 systemd[1]: libpod-aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30.scope: Deactivated successfully.
Feb 02 15:07:08 compute-0 podman[81258]: 2026-02-02 15:07:08.377135836 +0000 UTC m=+1.291842618 container died aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ad32a0187568cd6773e46231846795ad3513d37cfece6825dc7e029efd15ba-merged.mount: Deactivated successfully.
Feb 02 15:07:08 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'localpool'
Feb 02 15:07:08 compute-0 podman[81258]: 2026-02-02 15:07:08.46754662 +0000 UTC m=+1.382253422 container remove aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30 (image=quay.io/ceph/ceph:v20, name=inspiring_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:08 compute-0 podman[81539]: 2026-02-02 15:07:08.468489422 +0000 UTC m=+0.183394166 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:08 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'mds_autoscaler'
Feb 02 15:07:08 compute-0 sudo[81254]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:08 compute-0 systemd[1]: libpod-conmon-aa4fd7911eaabb573e4419ca4a67c62d10d054890a437f335f6119dc911e5d30.scope: Deactivated successfully.
Feb 02 15:07:08 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:08 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'mirroring'
Feb 02 15:07:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:08 compute-0 sudo[81471]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:08 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'nfs'
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 sudo[81688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrejesgkwasjetukailpyhwmgdcrplze ; /usr/bin/python3'
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:08 compute-0 ceph-mon[75334]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb 02 15:07:08 compute-0 ceph-mon[75334]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 02 15:07:08 compute-0 ceph-mon[75334]: Reconfiguring mgr.compute-0.rxryxi (unknown last config time)...
Feb 02 15:07:08 compute-0 ceph-mon[75334]: Reconfiguring daemon mgr.compute-0.rxryxi on compute-0
Feb 02 15:07:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1400680670' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 02 15:07:08 compute-0 ceph-mon[75334]: osdmap e3: 0 total, 0 up, 0 in
Feb 02 15:07:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 sudo[81688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:08 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:08 compute-0 sudo[81691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:08 compute-0 sudo[81691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:08 compute-0 sudo[81691]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:08 compute-0 python3[81690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:09.009647727 +0000 UTC m=+0.042362017 container create 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:09 compute-0 systemd[1]: Started libpod-conmon-4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b.scope.
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'orchestrator'
Feb 02 15:07:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8ba45c26b620d7e3e22b8d0d5223fbbc763d23b2355e80012dfe79c7ebe80e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8ba45c26b620d7e3e22b8d0d5223fbbc763d23b2355e80012dfe79c7ebe80e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8ba45c26b620d7e3e22b8d0d5223fbbc763d23b2355e80012dfe79c7ebe80e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:09.080214263 +0000 UTC m=+0.112928613 container init 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:09.084719427 +0000 UTC m=+0.117433707 container start 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:08.992043002 +0000 UTC m=+0.024757302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:09.087853009 +0000 UTC m=+0.120567359 container attach 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'osd_perf_query'
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'osd_support'
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'pg_autoscaler'
Feb 02 15:07:09 compute-0 sudo[81755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:09 compute-0 sudo[81755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:09 compute-0 sudo[81755]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'progress'
Feb 02 15:07:09 compute-0 sudo[81780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 02 15:07:09 compute-0 sudo[81780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'prometheus'
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 2 completed events
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 sudo[81780]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO root] Added host compute-0
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb 02 15:07:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev cc5db175-1a85-4be5-a6e8-ce2477941b5d (Updating mgr deployment (-1 -> 1))
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.pmnvvl from compute-0 -- ports [8765]
Feb 02 15:07:09 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.pmnvvl from compute-0 -- ports [8765]
Feb 02 15:07:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:09 compute-0 unruffled_brown[81730]: Added host 'compute-0' with addr '192.168.122.100'
Feb 02 15:07:09 compute-0 unruffled_brown[81730]: Scheduled mon update...
Feb 02 15:07:09 compute-0 unruffled_brown[81730]: Scheduled mgr update...
Feb 02 15:07:09 compute-0 unruffled_brown[81730]: Scheduled osd.default_drive_group update...
Feb 02 15:07:09 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'rbd_support'
Feb 02 15:07:09 compute-0 systemd[1]: libpod-4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b.scope: Deactivated successfully.
Feb 02 15:07:09 compute-0 podman[81716]: 2026-02-02 15:07:09.977868681 +0000 UTC m=+1.010582991 container died 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 02 15:07:10 compute-0 sudo[81825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:10 compute-0 sudo[81825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8ba45c26b620d7e3e22b8d0d5223fbbc763d23b2355e80012dfe79c7ebe80e-merged.mount: Deactivated successfully.
Feb 02 15:07:10 compute-0 sudo[81825]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:10 compute-0 podman[81716]: 2026-02-02 15:07:10.034852883 +0000 UTC m=+1.067567193 container remove 4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b (image=quay.io/ceph/ceph:v20, name=unruffled_brown, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:10 compute-0 systemd[1]: libpod-conmon-4131d88d15382cecaade2dfe08ad100a76ad9ea94abef6ecc972941744bcf23b.scope: Deactivated successfully.
Feb 02 15:07:10 compute-0 sudo[81688]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:10 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'rgw'
Feb 02 15:07:10 compute-0 sudo[81863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --name mgr.compute-0.pmnvvl --force --tcp-ports 8765
Feb 02 15:07:10 compute-0 sudo[81863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:10 compute-0 ceph-mgr[80803]: mgr[py] Loading python module 'rook'
Feb 02 15:07:10 compute-0 sudo[81923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxrgdhwswmzgpbgrpkqqnikrzlcvdjx ; /usr/bin/python3'
Feb 02 15:07:10 compute-0 sudo[81923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:10 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.pmnvvl for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:10 compute-0 python3[81927]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:10 compute-0 podman[81956]: 2026-02-02 15:07:10.569431237 +0000 UTC m=+0.080380452 container create fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:07:10 compute-0 podman[81956]: 2026-02-02 15:07:10.512967047 +0000 UTC m=+0.023916252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:10 compute-0 systemd[1]: Started libpod-conmon-fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440.scope.
Feb 02 15:07:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0c410d50b9693ba326ad0dab394e022a7f63e3d4111908a5ff6e4b66a9366/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0c410d50b9693ba326ad0dab394e022a7f63e3d4111908a5ff6e4b66a9366/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb0c410d50b9693ba326ad0dab394e022a7f63e3d4111908a5ff6e4b66a9366/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:10 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:10 compute-0 podman[81963]: 2026-02-02 15:07:10.747595911 +0000 UTC m=+0.238789571 container died 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:10 compute-0 podman[81956]: 2026-02-02 15:07:10.748044622 +0000 UTC m=+0.258993907 container init fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:10 compute-0 podman[81956]: 2026-02-02 15:07:10.754211874 +0000 UTC m=+0.265161099 container start fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f32d44015e98c246205cd703d074891c56c2c6b9c77f9409c552386f2ed23a1-merged.mount: Deactivated successfully.
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Added host compute-0
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Saving service mon spec with placement compute-0
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Saving service mgr spec with placement compute-0
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Marking host: compute-0 for OSDSpec preview refresh.
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Saving service osd.default_drive_group spec with placement compute-0
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: Removing daemon mgr.compute-0.pmnvvl from compute-0 -- ports [8765]
Feb 02 15:07:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:10 compute-0 ceph-mon[75334]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:10 compute-0 podman[81963]: 2026-02-02 15:07:10.88862072 +0000 UTC m=+0.379814380 container remove 51be39d21437174fce5e5e78430c6b90728e0dbcdf37bd042005b759aa04ae2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:10 compute-0 bash[81963]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-pmnvvl
Feb 02 15:07:10 compute-0 podman[81956]: 2026-02-02 15:07:10.944291793 +0000 UTC m=+0.455241008 container attach fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:10 compute-0 systemd[1]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mgr.compute-0.pmnvvl.service: Main process exited, code=exited, status=143/n/a
Feb 02 15:07:11 compute-0 systemd[1]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mgr.compute-0.pmnvvl.service: Failed with result 'exit-code'.
Feb 02 15:07:11 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.pmnvvl for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:11 compute-0 systemd[1]: ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mgr.compute-0.pmnvvl.service: Consumed 5.803s CPU time, 337.2M memory peak, read 0B from disk, written 912.0K to disk.
Feb 02 15:07:11 compute-0 systemd[1]: Reloading.
Feb 02 15:07:11 compute-0 systemd-sysv-generator[82080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:11 compute-0 systemd-rc-local-generator[82077]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 15:07:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411771089' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:07:11 compute-0 youthful_shaw[81981]: 
Feb 02 15:07:11 compute-0 youthful_shaw[81981]: {"fsid":"e43470b2-6632-573a-87d3-0f5428ec59e9","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":45,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T15:06:23:601132+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T15:06:23.603344+0000","services":{}},"progress_events":{"cc5db175-1a85-4be5-a6e8-ce2477941b5d":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb 02 15:07:11 compute-0 podman[81956]: 2026-02-02 15:07:11.257151889 +0000 UTC m=+0.768101104 container died fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:11 compute-0 systemd[1]: libpod-fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440.scope: Deactivated successfully.
Feb 02 15:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb0c410d50b9693ba326ad0dab394e022a7f63e3d4111908a5ff6e4b66a9366-merged.mount: Deactivated successfully.
Feb 02 15:07:11 compute-0 podman[81956]: 2026-02-02 15:07:11.33443275 +0000 UTC m=+0.845381935 container remove fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440 (image=quay.io/ceph/ceph:v20, name=youthful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:11 compute-0 systemd[1]: libpod-conmon-fb03b671d6752e9615c228da439628b4979865d17d2f7ffc89298c21bed48440.scope: Deactivated successfully.
Feb 02 15:07:11 compute-0 sudo[81923]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:11 compute-0 sudo[81863]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:11 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.pmnvvl
Feb 02 15:07:11 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.pmnvvl
Feb 02 15:07:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.pmnvvl"} v 0)
Feb 02 15:07:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.pmnvvl"} : dispatch
Feb 02 15:07:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pmnvvl"}]': finished
Feb 02 15:07:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:07:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:11 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev cc5db175-1a85-4be5-a6e8-ce2477941b5d (Updating mgr deployment (-1 -> 1))
Feb 02 15:07:11 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event cc5db175-1a85-4be5-a6e8-ce2477941b5d (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb 02 15:07:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 02 15:07:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:11 compute-0 sudo[82108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:11 compute-0 sudo[82108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:11 compute-0 sudo[82108]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:11 compute-0 sudo[82133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:11 compute-0 sudo[82133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:11 compute-0 sudo[82133]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:11 compute-0 sudo[82158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:07:11 compute-0 sudo[82158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/411771089' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:07:11 compute-0 ceph-mon[75334]: Removing key for mgr.compute-0.pmnvvl
Feb 02 15:07:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.pmnvvl"} : dispatch
Feb 02 15:07:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pmnvvl"}]': finished
Feb 02 15:07:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 podman[82224]: 2026-02-02 15:07:12.01559548 +0000 UTC m=+0.076716418 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:07:12 compute-0 podman[82224]: 2026-02-02 15:07:12.128204273 +0000 UTC m=+0.189325211 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:07:12 compute-0 sudo[82158]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:07:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:12 compute-0 sudo[82319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:12 compute-0 sudo[82319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:12 compute-0 sudo[82319]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:12 compute-0 sudo[82344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:07:12 compute-0 sudo[82344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:12 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.882150931 +0000 UTC m=+0.041055966 container create 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:12 compute-0 systemd[1]: Started libpod-conmon-40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b.scope.
Feb 02 15:07:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.862262054 +0000 UTC m=+0.021167149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.96198561 +0000 UTC m=+0.120890655 container init 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.968008129 +0000 UTC m=+0.126913154 container start 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.971476649 +0000 UTC m=+0.130381674 container attach 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:07:12 compute-0 mystifying_mcclintock[82396]: 167 167
Feb 02 15:07:12 compute-0 systemd[1]: libpod-40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b.scope: Deactivated successfully.
Feb 02 15:07:12 compute-0 podman[82380]: 2026-02-02 15:07:12.972882942 +0000 UTC m=+0.131787997 container died 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa04fe055db4a6fb1c9882fda7f814957055910ae067b06c26b5bb1853d6675e-merged.mount: Deactivated successfully.
Feb 02 15:07:13 compute-0 podman[82380]: 2026-02-02 15:07:13.00538631 +0000 UTC m=+0.164291315 container remove 40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:07:13 compute-0 systemd[1]: libpod-conmon-40225f53e1b1d2c2a9dcd88fab995a214ff27be68538c9f60143532f2844b93b.scope: Deactivated successfully.
Feb 02 15:07:13 compute-0 podman[82420]: 2026-02-02 15:07:13.151830554 +0000 UTC m=+0.044723521 container create 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:13 compute-0 systemd[1]: Started libpod-conmon-334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc.scope.
Feb 02 15:07:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:13 compute-0 podman[82420]: 2026-02-02 15:07:13.133342887 +0000 UTC m=+0.026235834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:13 compute-0 podman[82420]: 2026-02-02 15:07:13.233102906 +0000 UTC m=+0.125995873 container init 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:13 compute-0 podman[82420]: 2026-02-02 15:07:13.24717797 +0000 UTC m=+0.140070887 container start 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:13 compute-0 podman[82420]: 2026-02-02 15:07:13.251368717 +0000 UTC m=+0.144261684 container attach 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:07:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:13 compute-0 ceph-mon[75334]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:13 compute-0 thirsty_hugle[82436]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:07:13 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:13 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:13 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3bab3955-37f6-439d-a6d9-c93f1b81f868
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "3bab3955-37f6-439d-a6d9-c93f1b81f868"} v 0)
Feb 02 15:07:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3722766063' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "3bab3955-37f6-439d-a6d9-c93f1b81f868"} : dispatch
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3722766063' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3bab3955-37f6-439d-a6d9-c93f1b81f868"}]': finished
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb 02 15:07:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3722766063' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "3bab3955-37f6-439d-a6d9-c93f1b81f868"} : dispatch
Feb 02 15:07:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3722766063' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3bab3955-37f6-439d-a6d9-c93f1b81f868"}]': finished
Feb 02 15:07:14 compute-0 ceph-mon[75334]: osdmap e4: 1 total, 0 up, 1 in
Feb 02 15:07:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:14 compute-0 lvm[82528]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:14 compute-0 lvm[82528]: VG ceph_vg0 finished
Feb 02 15:07:14 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb 02 15:07:14 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 02 15:07:14 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 15:07:14 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:14 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 3 completed events
Feb 02 15:07:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 02 15:07:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162616002' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]:  stderr: got monmap epoch 1
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: --> Creating keyring file for osd.0
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 3bab3955-37f6-439d-a6d9-c93f1b81f868 --setuser ceph --setgroup ceph
Feb 02 15:07:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 02 15:07:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 02 15:07:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:15 compute-0 ceph-mon[75334]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3162616002' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:15.168+0000 7fbd97df78c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:15.199+0000 7fbd97df78c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:15 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d1192b72-b454-486a-9485-4e52faa418e9
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d1192b72-b454-486a-9485-4e52faa418e9"} v 0)
Feb 02 15:07:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2313717166' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d1192b72-b454-486a-9485-4e52faa418e9"} : dispatch
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2313717166' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1192b72-b454-486a-9485-4e52faa418e9"}]': finished
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb 02 15:07:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:16 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:16 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:16 compute-0 lvm[83481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:16 compute-0 lvm[83481]: VG ceph_vg1 finished
Feb 02 15:07:16 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 02 15:07:16 compute-0 ceph-mon[75334]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 02 15:07:16 compute-0 ceph-mon[75334]: Cluster is now healthy
Feb 02 15:07:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2313717166' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d1192b72-b454-486a-9485-4e52faa418e9"} : dispatch
Feb 02 15:07:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2313717166' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d1192b72-b454-486a-9485-4e52faa418e9"}]': finished
Feb 02 15:07:16 compute-0 ceph-mon[75334]: osdmap e5: 2 total, 0 up, 2 in
Feb 02 15:07:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:16 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 02 15:07:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 02 15:07:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485498677' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:17 compute-0 thirsty_hugle[82436]:  stderr: got monmap epoch 1
Feb 02 15:07:17 compute-0 thirsty_hugle[82436]: --> Creating keyring file for osd.1
Feb 02 15:07:17 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 02 15:07:17 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 02 15:07:17 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d1192b72-b454-486a-9485-4e52faa418e9 --setuser ceph --setgroup ceph
Feb 02 15:07:17 compute-0 ceph-mon[75334]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3485498677' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:17.324+0000 7fcb5438b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:17.351+0000 7fcb5438b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new aa948d65-9934-4797-913a-22fcbacb9ed9
Feb 02 15:07:18 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "aa948d65-9934-4797-913a-22fcbacb9ed9"} v 0)
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/216359243' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aa948d65-9934-4797-913a-22fcbacb9ed9"} : dispatch
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/216359243' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aa948d65-9934-4797-913a-22fcbacb9ed9"}]': finished
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:18 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:18 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:18 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:18 compute-0 lvm[84435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:18 compute-0 lvm[84435]: VG ceph_vg2 finished
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:18 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb 02 15:07:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 02 15:07:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264668181' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:19 compute-0 thirsty_hugle[82436]:  stderr: got monmap epoch 1
Feb 02 15:07:19 compute-0 thirsty_hugle[82436]: --> Creating keyring file for osd.2
Feb 02 15:07:19 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb 02 15:07:19 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb 02 15:07:19 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid aa948d65-9934-4797-913a-22fcbacb9ed9 --setuser ceph --setgroup ceph
Feb 02 15:07:19 compute-0 ceph-mon[75334]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/216359243' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aa948d65-9934-4797-913a-22fcbacb9ed9"} : dispatch
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/216359243' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aa948d65-9934-4797-913a-22fcbacb9ed9"}]': finished
Feb 02 15:07:19 compute-0 ceph-mon[75334]: osdmap e6: 3 total, 0 up, 3 in
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1264668181' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:19.506+0000 7f71af84f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]:  stderr: 2026-02-02T15:07:19.524+0000 7f71af84f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 02 15:07:20 compute-0 thirsty_hugle[82436]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb 02 15:07:20 compute-0 systemd[1]: libpod-334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc.scope: Deactivated successfully.
Feb 02 15:07:20 compute-0 systemd[1]: libpod-334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc.scope: Consumed 5.592s CPU time.
Feb 02 15:07:20 compute-0 podman[85360]: 2026-02-02 15:07:20.510499167 +0000 UTC m=+0.023702977 container died 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:07:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd418c3ad7fadfb46574681c5e81e3114583a0df63290194bb4701fa33d3859-merged.mount: Deactivated successfully.
Feb 02 15:07:20 compute-0 podman[85360]: 2026-02-02 15:07:20.549397313 +0000 UTC m=+0.062601043 container remove 334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hugle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:20 compute-0 systemd[1]: libpod-conmon-334a020ffa220b681960f627c24c39963e411c6d8644ce0a8f1754c31e01f2bc.scope: Deactivated successfully.
Feb 02 15:07:20 compute-0 sudo[82344]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:20 compute-0 sudo[85375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:20 compute-0 sudo[85375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:20 compute-0 sudo[85375]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:20 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:20 compute-0 sudo[85400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:07:20 compute-0 sudo[85400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:20 compute-0 podman[85438]: 2026-02-02 15:07:20.954609667 +0000 UTC m=+0.043647786 container create 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:07:20 compute-0 systemd[1]: Started libpod-conmon-52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd.scope.
Feb 02 15:07:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:20.931464294 +0000 UTC m=+0.020502473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:21.041190231 +0000 UTC m=+0.130228400 container init 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:21.050114368 +0000 UTC m=+0.139152457 container start 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:21.053511655 +0000 UTC m=+0.142549824 container attach 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:07:21 compute-0 agitated_morse[85454]: 167 167
Feb 02 15:07:21 compute-0 systemd[1]: libpod-52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd.scope: Deactivated successfully.
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:21.055937032 +0000 UTC m=+0.144975161 container died 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e461c70bd462cb6383d707f8ce5ebd512c10218450c8e3491ee5fdc24fc6cf8-merged.mount: Deactivated successfully.
Feb 02 15:07:21 compute-0 podman[85438]: 2026-02-02 15:07:21.097207062 +0000 UTC m=+0.186245151 container remove 52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_morse, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:21 compute-0 systemd[1]: libpod-conmon-52e01479b95847b44a9f894e451a6a377c4a4b33d5b161b6d692b42f8e4384dd.scope: Deactivated successfully.
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.258625651 +0000 UTC m=+0.048322585 container create 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:21 compute-0 systemd[1]: Started libpod-conmon-844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7.scope.
Feb 02 15:07:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb676af71945148f2df4fce0c799c7a8a1fa03442b56cd689485a879f839d66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb676af71945148f2df4fce0c799c7a8a1fa03442b56cd689485a879f839d66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb676af71945148f2df4fce0c799c7a8a1fa03442b56cd689485a879f839d66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb676af71945148f2df4fce0c799c7a8a1fa03442b56cd689485a879f839d66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.327775443 +0000 UTC m=+0.117472627 container init 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.239451669 +0000 UTC m=+0.029148633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.336016543 +0000 UTC m=+0.125713477 container start 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.340749052 +0000 UTC m=+0.130446016 container attach 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:21 compute-0 cranky_shockley[85494]: {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     "0": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "devices": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "/dev/loop3"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             ],
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_name": "ceph_lv0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_size": "21470642176",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "name": "ceph_lv0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "tags": {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.crush_device_class": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.encrypted": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_id": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.vdo": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.with_tpm": "0"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             },
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "vg_name": "ceph_vg0"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         }
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     ],
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     "1": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "devices": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "/dev/loop4"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             ],
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_name": "ceph_lv1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_size": "21470642176",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "name": "ceph_lv1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "tags": {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.crush_device_class": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.encrypted": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_id": "1",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.vdo": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.with_tpm": "0"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             },
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "vg_name": "ceph_vg1"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         }
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     ],
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     "2": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "devices": [
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "/dev/loop5"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             ],
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_name": "ceph_lv2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_size": "21470642176",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "name": "ceph_lv2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "tags": {
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.crush_device_class": "",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.encrypted": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osd_id": "2",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.vdo": "0",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:                 "ceph.with_tpm": "0"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             },
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "type": "block",
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:             "vg_name": "ceph_vg2"
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:         }
Feb 02 15:07:21 compute-0 cranky_shockley[85494]:     ]
Feb 02 15:07:21 compute-0 cranky_shockley[85494]: }
Feb 02 15:07:21 compute-0 systemd[1]: libpod-844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7.scope: Deactivated successfully.
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.585218824 +0000 UTC m=+0.374915758 container died 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bb676af71945148f2df4fce0c799c7a8a1fa03442b56cd689485a879f839d66-merged.mount: Deactivated successfully.
Feb 02 15:07:21 compute-0 podman[85477]: 2026-02-02 15:07:21.630805353 +0000 UTC m=+0.420502287 container remove 844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:21 compute-0 systemd[1]: libpod-conmon-844025df1a42428c5cb27340c41fbe03fa27e28991515641de56cece4def97f7.scope: Deactivated successfully.
Feb 02 15:07:21 compute-0 sudo[85400]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 02 15:07:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 02 15:07:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:21 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb 02 15:07:21 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb 02 15:07:21 compute-0 sudo[85516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:21 compute-0 sudo[85516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:21 compute-0 sudo[85516]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:21 compute-0 ceph-mon[75334]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 02 15:07:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:21 compute-0 sudo[85541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:21 compute-0 sudo[85541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.200099038 +0000 UTC m=+0.051693462 container create ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Feb 02 15:07:22 compute-0 systemd[1]: Started libpod-conmon-ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81.scope.
Feb 02 15:07:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.177525398 +0000 UTC m=+0.029119902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.284160084 +0000 UTC m=+0.135754618 container init ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.290137802 +0000 UTC m=+0.141732266 container start ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:22 compute-0 heuristic_diffie[85623]: 167 167
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.29439986 +0000 UTC m=+0.145994324 container attach ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:22 compute-0 systemd[1]: libpod-ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81.scope: Deactivated successfully.
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.295650239 +0000 UTC m=+0.147244703 container died ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 02 15:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff780ffbc4c242bc7aff5236f47850b58ae93e183146d6088fe82530294f0ed-merged.mount: Deactivated successfully.
Feb 02 15:07:22 compute-0 podman[85607]: 2026-02-02 15:07:22.335534918 +0000 UTC m=+0.187129342 container remove ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_diffie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:07:22 compute-0 systemd[1]: libpod-conmon-ed90337944dddc4f93d49156739118959cbb609eeb16536be578ac498785eb81.scope: Deactivated successfully.
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.525851772 +0000 UTC m=+0.043401211 container create cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:07:22 compute-0 systemd[1]: Started libpod-conmon-cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae.scope.
Feb 02 15:07:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.597646585 +0000 UTC m=+0.115196084 container init cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.50492377 +0000 UTC m=+0.022473229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.605076846 +0000 UTC m=+0.122626255 container start cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.608517786 +0000 UTC m=+0.126067215 container attach cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:07:22 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:22 compute-0 ceph-mon[75334]: Deploying daemon osd.0 on compute-0
Feb 02 15:07:22 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test[85670]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 02 15:07:22 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test[85670]:                             [--no-systemd] [--no-tmpfs]
Feb 02 15:07:22 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test[85670]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 02 15:07:22 compute-0 systemd[1]: libpod-cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae.scope: Deactivated successfully.
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.777764645 +0000 UTC m=+0.295314094 container died cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e555ee151076f8d66238c1587f1eed37493adbbdc5243a9b6aa3f6e5ec8f0dc-merged.mount: Deactivated successfully.
Feb 02 15:07:22 compute-0 podman[85653]: 2026-02-02 15:07:22.82053348 +0000 UTC m=+0.338082929 container remove cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:22 compute-0 systemd[1]: libpod-conmon-cb734ef344fb5d48ec4efdcf5732977bcaa82f18c26101107729490081c53cae.scope: Deactivated successfully.
Feb 02 15:07:23 compute-0 systemd[1]: Reloading.
Feb 02 15:07:23 compute-0 systemd-rc-local-generator[85727]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:23 compute-0 systemd-sysv-generator[85730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:23 compute-0 systemd[1]: Reloading.
Feb 02 15:07:23 compute-0 systemd-rc-local-generator[85769]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:23 compute-0 systemd-sysv-generator[85773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:23 compute-0 systemd[1]: Starting Ceph osd.0 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:23 compute-0 podman[85832]: 2026-02-02 15:07:23.724074533 +0000 UTC m=+0.041297742 container create e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:23 compute-0 ceph-mon[75334]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:23 compute-0 podman[85832]: 2026-02-02 15:07:23.707513822 +0000 UTC m=+0.024737041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:23 compute-0 podman[85832]: 2026-02-02 15:07:23.813458152 +0000 UTC m=+0.130681391 container init e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:23 compute-0 podman[85832]: 2026-02-02 15:07:23.820360791 +0000 UTC m=+0.137584000 container start e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:07:23 compute-0 podman[85832]: 2026-02-02 15:07:23.824348693 +0000 UTC m=+0.141571992 container attach e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:23 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:23 compute-0 bash[85832]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 lvm[85931]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:24 compute-0 lvm[85931]: VG ceph_vg0 finished
Feb 02 15:07:24 compute-0 lvm[85934]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:24 compute-0 lvm[85934]: VG ceph_vg1 finished
Feb 02 15:07:24 compute-0 lvm[85936]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:24 compute-0 lvm[85936]: VG ceph_vg2 finished
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 bash[85832]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:24 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:24 compute-0 bash[85832]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 02 15:07:24 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate[85848]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 02 15:07:24 compute-0 bash[85832]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 02 15:07:24 compute-0 systemd[1]: libpod-e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2.scope: Deactivated successfully.
Feb 02 15:07:24 compute-0 systemd[1]: libpod-e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2.scope: Consumed 1.304s CPU time.
Feb 02 15:07:24 compute-0 podman[85832]: 2026-02-02 15:07:24.821920523 +0000 UTC m=+1.139143722 container died e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b99da1b13bed6ee0554439d8e1149505f756eef8fe681cc5ac716a282092b775-merged.mount: Deactivated successfully.
Feb 02 15:07:24 compute-0 podman[85832]: 2026-02-02 15:07:24.865871275 +0000 UTC m=+1.183094504 container remove e5b733d1c71abda33f6031b08a5a3b866ce08568858bba5145e3bbad214ce8c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0-activate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:07:25 compute-0 podman[86096]: 2026-02-02 15:07:25.057678334 +0000 UTC m=+0.051453657 container create 27a84c32fe8867993d2ccc116d9e964f823e4fd06b13263445666ba78b302b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86753d06ac2df336b8d20c0c3d0ef99f110f48d5ac05ece7846e5c821775a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86753d06ac2df336b8d20c0c3d0ef99f110f48d5ac05ece7846e5c821775a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86753d06ac2df336b8d20c0c3d0ef99f110f48d5ac05ece7846e5c821775a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86753d06ac2df336b8d20c0c3d0ef99f110f48d5ac05ece7846e5c821775a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee86753d06ac2df336b8d20c0c3d0ef99f110f48d5ac05ece7846e5c821775a0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:25 compute-0 podman[86096]: 2026-02-02 15:07:25.116753594 +0000 UTC m=+0.110529017 container init 27a84c32fe8867993d2ccc116d9e964f823e4fd06b13263445666ba78b302b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:07:25 compute-0 podman[86096]: 2026-02-02 15:07:25.127646595 +0000 UTC m=+0.121421948 container start 27a84c32fe8867993d2ccc116d9e964f823e4fd06b13263445666ba78b302b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:07:25 compute-0 podman[86096]: 2026-02-02 15:07:25.036657659 +0000 UTC m=+0.030433022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:25 compute-0 bash[86096]: 27a84c32fe8867993d2ccc116d9e964f823e4fd06b13263445666ba78b302b65
Feb 02 15:07:25 compute-0 systemd[1]: Started Ceph osd.0 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:25 compute-0 ceph-osd[86115]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: pidfile_write: ignore empty --pid-file
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 sudo[85541]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 02 15:07:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:25 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb 02 15:07:25 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 sudo[86129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:25 compute-0 sudo[86129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:25 compute-0 sudo[86129]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908400 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 sudo[86160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:25 compute-0 sudo[86160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3908000 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb 02 15:07:25 compute-0 ceph-osd[86115]: load: jerasure load: lrc 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad3909c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount shared_bdev_used = 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Git sha 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DB SUMMARY
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DB Session ID:  YSSGXTRX5HC89XOPC3R6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                     Options.env: 0x557ad3799ea0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                Options.info_log: 0x557ad47ea8a0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.write_buffer_manager: 0x557ad37feb40
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Compression algorithms supported:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c9d82c39-6fbd-40d7-8cbf-38a118953668
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845566768, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845567991, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: freelist init
Feb 02 15:07:25 compute-0 ceph-osd[86115]: freelist _read_cfg
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs umount
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) close
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bdev(0x557ad459f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluefs mount shared_bdev_used = 27262976
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Git sha 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DB SUMMARY
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DB Session ID:  YSSGXTRX5HC89XOPC3R7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                     Options.env: 0x557ad3799ce0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                Options.info_log: 0x557ad47eaa20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.write_buffer_manager: 0x557ad37feb40
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Compression algorithms supported:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eabc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eb0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eb0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557ad47eb0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557ad379da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c9d82c39-6fbd-40d7-8cbf-38a118953668
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845636239, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845641497, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044845, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9d82c39-6fbd-40d7-8cbf-38a118953668", "db_session_id": "YSSGXTRX5HC89XOPC3R7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845645541, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044845, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9d82c39-6fbd-40d7-8cbf-38a118953668", "db_session_id": "YSSGXTRX5HC89XOPC3R7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845649279, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044845, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c9d82c39-6fbd-40d7-8cbf-38a118953668", "db_session_id": "YSSGXTRX5HC89XOPC3R7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044845650861, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557ad4a04000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: DB pointer 0x557ad49a4000
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb 02 15:07:25 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:07:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:07:25 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 02 15:07:25 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 02 15:07:25 compute-0 ceph-osd[86115]: _get_class not permitted to load lua
Feb 02 15:07:25 compute-0 ceph-osd[86115]: _get_class not permitted to load sdk
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 load_pgs
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 load_pgs opened 0 pgs
Feb 02 15:07:25 compute-0 ceph-osd[86115]: osd.0 0 log_to_monitors true
Feb 02 15:07:25 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0[86111]: 2026-02-02T15:07:25.683+0000 7fb212b528c0 -1 osd.0 0 log_to_monitors true
Feb 02 15:07:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb 02 15:07:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 02 15:07:25 compute-0 ceph-mon[75334]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 02 15:07:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:25 compute-0 ceph-mon[75334]: from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.769811878 +0000 UTC m=+0.040600706 container create ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:07:25 compute-0 systemd[1]: Started libpod-conmon-ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e.scope.
Feb 02 15:07:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.830805393 +0000 UTC m=+0.101594241 container init ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.838396168 +0000 UTC m=+0.109185006 container start ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.841886588 +0000 UTC m=+0.112675416 container attach ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:07:25 compute-0 affectionate_gould[86675]: 167 167
Feb 02 15:07:25 compute-0 systemd[1]: libpod-ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e.scope: Deactivated successfully.
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.84502421 +0000 UTC m=+0.115813058 container died ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.751100117 +0000 UTC m=+0.021888965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1dc4165170b3fa0344b0fa76662c0e8be6643e2520fcdd25d67c608d99ba760-merged.mount: Deactivated successfully.
Feb 02 15:07:25 compute-0 podman[86658]: 2026-02-02 15:07:25.881794028 +0000 UTC m=+0.152582866 container remove ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:25 compute-0 systemd[1]: libpod-conmon-ada2540b3e7010a0207f241120ab6a3bc2c1cfbeae510c63b4763c72f021637e.scope: Deactivated successfully.
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.104519618 +0000 UTC m=+0.049643465 container create 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:07:26 compute-0 systemd[1]: Started libpod-conmon-8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe.scope.
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.088306825 +0000 UTC m=+0.033430712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.204621234 +0000 UTC m=+0.149745121 container init 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.218644387 +0000 UTC m=+0.163768244 container start 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.222557587 +0000 UTC m=+0.167681474 container attach 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:26 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:26 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:26 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:26 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test[86721]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 02 15:07:26 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test[86721]:                             [--no-systemd] [--no-tmpfs]
Feb 02 15:07:26 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test[86721]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 02 15:07:26 compute-0 systemd[1]: libpod-8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe.scope: Deactivated successfully.
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.432051683 +0000 UTC m=+0.377175530 container died 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-530c74d5bc4f57c68e9427664aa20fd3f41ff4835735216f72e7afaabf5e37fa-merged.mount: Deactivated successfully.
Feb 02 15:07:26 compute-0 podman[86705]: 2026-02-02 15:07:26.472938835 +0000 UTC m=+0.418062682 container remove 8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:07:26 compute-0 systemd[1]: libpod-conmon-8de5f186f855b20f7c0ada78216f91b201a4f8dd7bd97e0dbda5a84bb4b25efe.scope: Deactivated successfully.
Feb 02 15:07:26 compute-0 systemd[1]: Reloading.
Feb 02 15:07:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 02 15:07:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 02 15:07:26 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:26 compute-0 systemd-rc-local-generator[86783]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:26 compute-0 systemd-sysv-generator[86787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:26 compute-0 ceph-mon[75334]: Deploying daemon osd.1 on compute-0
Feb 02 15:07:26 compute-0 ceph-mon[75334]: from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 02 15:07:26 compute-0 ceph-mon[75334]: osdmap e7: 3 total, 0 up, 3 in
Feb 02 15:07:26 compute-0 ceph-mon[75334]: from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:26 compute-0 systemd[1]: Reloading.
Feb 02 15:07:27 compute-0 systemd-rc-local-generator[86824]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:27 compute-0 systemd-sysv-generator[86828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:27 compute-0 systemd[1]: Starting Ceph osd.1 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 done with init, starting boot process
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 start_boot
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 02 15:07:27 compute-0 ceph-osd[86115]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:27 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:27 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:27 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:27 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3800538028; not ready for session (expect reconnect)
Feb 02 15:07:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:27 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:27 compute-0 podman[86882]: 2026-02-02 15:07:27.369543738 +0000 UTC m=+0.058018497 container create cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:07:27 compute-0 podman[86882]: 2026-02-02 15:07:27.334304337 +0000 UTC m=+0.022779106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:27 compute-0 podman[86882]: 2026-02-02 15:07:27.466070442 +0000 UTC m=+0.154545211 container init cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:27 compute-0 podman[86882]: 2026-02-02 15:07:27.474303062 +0000 UTC m=+0.162777841 container start cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:07:27 compute-0 podman[86882]: 2026-02-02 15:07:27.483644557 +0000 UTC m=+0.172119326 container attach cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:07:27 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:27 compute-0 bash[86882]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:27 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:27 compute-0 bash[86882]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:27 compute-0 ceph-mon[75334]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:27 compute-0 ceph-mon[75334]: from='osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:27 compute-0 ceph-mon[75334]: osdmap e8: 3 total, 0 up, 3 in
Feb 02 15:07:27 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:27 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:27 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:27 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:28 compute-0 lvm[86981]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:28 compute-0 lvm[86981]: VG ceph_vg0 finished
Feb 02 15:07:28 compute-0 lvm[86984]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:28 compute-0 lvm[86984]: VG ceph_vg1 finished
Feb 02 15:07:28 compute-0 lvm[86985]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:28 compute-0 lvm[86985]: VG ceph_vg2 finished
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:28 compute-0 bash[86882]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:28 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3800538028; not ready for session (expect reconnect)
Feb 02 15:07:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:28 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:28 compute-0 bash[86882]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 02 15:07:28 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate[86898]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 02 15:07:28 compute-0 bash[86882]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 02 15:07:28 compute-0 systemd[1]: libpod-cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380.scope: Deactivated successfully.
Feb 02 15:07:28 compute-0 systemd[1]: libpod-cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380.scope: Consumed 1.241s CPU time.
Feb 02 15:07:28 compute-0 podman[87091]: 2026-02-02 15:07:28.52737928 +0000 UTC m=+0.023835290 container died cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f34a690b5ce961778c2d3e0dfecc912349bea7753851fa9698a710b4143ed3-merged.mount: Deactivated successfully.
Feb 02 15:07:28 compute-0 podman[87091]: 2026-02-02 15:07:28.608809985 +0000 UTC m=+0.105266005 container remove cb7bcbd9c763b9c9264e0a89449fd6952719af61a620490f185ea9d1a4a8d380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:07:28 compute-0 ceph-mgr[75628]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 02 15:07:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:28 compute-0 ceph-mon[75334]: purged_snaps scrub starts
Feb 02 15:07:28 compute-0 ceph-mon[75334]: purged_snaps scrub ok
Feb 02 15:07:28 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:28 compute-0 podman[87151]: 2026-02-02 15:07:28.844773102 +0000 UTC m=+0.050113826 container create 9db978a5e94ba5b27109e315c8ae8589d8c4ce6531647f631057b4a5133ccdc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd09c3d132b7b7fe7f734b0afdda65e8f94c71995fcdd5a724bbabb4a7999c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd09c3d132b7b7fe7f734b0afdda65e8f94c71995fcdd5a724bbabb4a7999c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd09c3d132b7b7fe7f734b0afdda65e8f94c71995fcdd5a724bbabb4a7999c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd09c3d132b7b7fe7f734b0afdda65e8f94c71995fcdd5a724bbabb4a7999c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd09c3d132b7b7fe7f734b0afdda65e8f94c71995fcdd5a724bbabb4a7999c73/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:28 compute-0 podman[87151]: 2026-02-02 15:07:28.906853011 +0000 UTC m=+0.112193785 container init 9db978a5e94ba5b27109e315c8ae8589d8c4ce6531647f631057b4a5133ccdc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:28 compute-0 podman[87151]: 2026-02-02 15:07:28.818173519 +0000 UTC m=+0.023514283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:28 compute-0 podman[87151]: 2026-02-02 15:07:28.913703349 +0000 UTC m=+0.119044083 container start 9db978a5e94ba5b27109e315c8ae8589d8c4ce6531647f631057b4a5133ccdc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:07:28 compute-0 bash[87151]: 9db978a5e94ba5b27109e315c8ae8589d8c4ce6531647f631057b4a5133ccdc8
Feb 02 15:07:28 compute-0 systemd[1]: Started Ceph osd.1 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:28 compute-0 ceph-osd[87170]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:07:28 compute-0 ceph-osd[87170]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 02 15:07:28 compute-0 ceph-osd[87170]: pidfile_write: ignore empty --pid-file
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:28 compute-0 sudo[86160]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:28 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 02 15:07:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 02 15:07:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:29 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb 02 15:07:29 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 sudo[87185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 sudo[87185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:29 compute-0 sudo[87185]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba400 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 sudo[87215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:29 compute-0 sudo[87215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dba000 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[87170]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 02 15:07:29 compute-0 ceph-osd[87170]: load: jerasure load: lrc 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 49.985 iops: 12796.108 elapsed_sec: 0.234
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[86115]: log_channel(cluster) log [WRN] : OSD bench result of 12796.107745 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 0 waiting for initial osdmap
Feb 02 15:07:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0[86111]: 2026-02-02T15:07:29.197+0000 7fb20ead4640 -1 osd.0 0 waiting for initial osdmap
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-0[86111]: 2026-02-02T15:07:29.222+0000 7fb2098d9640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 set_numa_affinity not setting numa affinity
Feb 02 15:07:29 compute-0 ceph-osd[86115]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb 02 15:07:29 compute-0 ceph-osd[87170]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3800538028; not ready for session (expect reconnect)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:29 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d615dbbc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount shared_bdev_used = 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Git sha 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DB SUMMARY
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DB Session ID:  4WZU7D2IALQHDYBRU2ZZ
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                     Options.env: 0x55d615c4bea0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                Options.info_log: 0x55d616ca68a0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.write_buffer_manager: 0x55d616b4cb40
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Compression algorithms supported:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9443f8ea-5503-4319-a31d-5a225f743663
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849346197, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849347336, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: freelist init
Feb 02 15:07:29 compute-0 ceph-osd[87170]: freelist _read_cfg
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs umount
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) close
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bdev(0x55d616a5b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluefs mount shared_bdev_used = 27262976
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Git sha 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DB SUMMARY
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DB Session ID:  4WZU7D2IALQHDYBRU2ZY
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                     Options.env: 0x55d615c4bce0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                Options.info_log: 0x55d616ca6a20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.write_buffer_manager: 0x55d616b4cb40
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Compression algorithms supported:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d616ca70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d615c4fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9443f8ea-5503-4319-a31d-5a225f743663
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849408334, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849414237, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044849, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9443f8ea-5503-4319-a31d-5a225f743663", "db_session_id": "4WZU7D2IALQHDYBRU2ZY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849420692, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044849, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9443f8ea-5503-4319-a31d-5a225f743663", "db_session_id": "4WZU7D2IALQHDYBRU2ZY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849425258, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044849, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9443f8ea-5503-4319-a31d-5a225f743663", "db_session_id": "4WZU7D2IALQHDYBRU2ZY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044849426973, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d616ca8000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: DB pointer 0x55d616e60000
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 02 15:07:29 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:07:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:07:29 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 02 15:07:29 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 02 15:07:29 compute-0 ceph-osd[87170]: _get_class not permitted to load lua
Feb 02 15:07:29 compute-0 ceph-osd[87170]: _get_class not permitted to load sdk
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 load_pgs
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 load_pgs opened 0 pgs
Feb 02 15:07:29 compute-0 ceph-osd[87170]: osd.1 0 log_to_monitors true
Feb 02 15:07:29 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1[87166]: 2026-02-02T15:07:29.463+0000 7f91fda768c0 -1 osd.1 0 log_to_monitors true
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.467854514 +0000 UTC m=+0.036728468 container create 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 02 15:07:29 compute-0 systemd[1]: Started libpod-conmon-2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f.scope.
Feb 02 15:07:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.529660078 +0000 UTC m=+0.098534032 container init 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.535202556 +0000 UTC m=+0.104076510 container start 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.538073552 +0000 UTC m=+0.106947506 container attach 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:29 compute-0 systemd[1]: libpod-2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f.scope: Deactivated successfully.
Feb 02 15:07:29 compute-0 confident_curran[87729]: 167 167
Feb 02 15:07:29 compute-0 conmon[87729]: conmon 2beef159ba825d445e49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f.scope/container/memory.events
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.54190834 +0000 UTC m=+0.110782294 container died 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.449170413 +0000 UTC m=+0.018044387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-305b004b9ba090e705b1a0276df4f53c1359a9c209d51eb7aa113d8254d9b300-merged.mount: Deactivated successfully.
Feb 02 15:07:29 compute-0 podman[87680]: 2026-02-02 15:07:29.579813533 +0000 UTC m=+0.148687497 container remove 2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:07:29 compute-0 systemd[1]: libpod-conmon-2beef159ba825d445e49bd53822472ff6a03b1a09b8939ce719c00639420989f.scope: Deactivated successfully.
Feb 02 15:07:29 compute-0 podman[87759]: 2026-02-02 15:07:29.761397406 +0000 UTC m=+0.035883327 container create 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:07:29 compute-0 systemd[1]: Started libpod-conmon-8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd.scope.
Feb 02 15:07:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:29 compute-0 ceph-mon[75334]: OSD bench result of 12796.107745 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:29 compute-0 ceph-mon[75334]: from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 02 15:07:29 compute-0 podman[87759]: 2026-02-02 15:07:29.823299482 +0000 UTC m=+0.097785453 container init 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:29 compute-0 podman[87759]: 2026-02-02 15:07:29.744057957 +0000 UTC m=+0.018543858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:29 compute-0 podman[87759]: 2026-02-02 15:07:29.843346994 +0000 UTC m=+0.117832905 container start 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:07:29 compute-0 podman[87759]: 2026-02-02 15:07:29.847613302 +0000 UTC m=+0.122099213 container attach 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:07:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb 02 15:07:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028] boot
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:30 compute-0 ceph-osd[86115]: osd.0 9 state: booting -> active
Feb 02 15:07:30 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:30 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:30 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test[87775]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 02 15:07:30 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test[87775]:                             [--no-systemd] [--no-tmpfs]
Feb 02 15:07:30 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test[87775]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 02 15:07:30 compute-0 systemd[1]: libpod-8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd.scope: Deactivated successfully.
Feb 02 15:07:30 compute-0 podman[87759]: 2026-02-02 15:07:30.046115255 +0000 UTC m=+0.320601136 container died 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-59e56c8e3085db80371b77c3113492a60ba0159f51e784d58c7f33283228784f-merged.mount: Deactivated successfully.
Feb 02 15:07:30 compute-0 podman[87759]: 2026-02-02 15:07:30.087401915 +0000 UTC m=+0.361887836 container remove 8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:07:30 compute-0 systemd[1]: libpod-conmon-8df0ce3cb575fe8fc5108ecbcf7156c710c0917980453d02ac1e45b61d2952fd.scope: Deactivated successfully.
Feb 02 15:07:30 compute-0 systemd[1]: Reloading.
Feb 02 15:07:30 compute-0 systemd-sysv-generator[87835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:30 compute-0 systemd-rc-local-generator[87831]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 02 15:07:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 02 15:07:30 compute-0 systemd[1]: Reloading.
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:30 compute-0 systemd-rc-local-generator[87872]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:07:30 compute-0 systemd-sysv-generator[87875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:07:30 compute-0 ceph-mgr[75628]: [devicehealth INFO root] creating mgr pool
Feb 02 15:07:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb 02 15:07:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 02 15:07:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:30 compute-0 systemd[1]: Starting Ceph osd.2 for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:07:30 compute-0 ceph-mon[75334]: Deploying daemon osd.2 on compute-0
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 02 15:07:30 compute-0 ceph-mon[75334]: osd.0 [v2:192.168.122.100:6802/3800538028,v1:192.168.122.100:6803/3800538028] boot
Feb 02 15:07:30 compute-0 ceph-mon[75334]: osdmap e9: 3 total, 1 up, 3 in
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 done with init, starting boot process
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 start_boot
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 02 15:07:31 compute-0 ceph-osd[87170]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb 02 15:07:31 compute-0 ceph-osd[86115]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 02 15:07:31 compute-0 ceph-osd[86115]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 02 15:07:31 compute-0 ceph-osd[86115]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 02 15:07:31 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:31 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:31 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1738766481; not ready for session (expect reconnect)
Feb 02 15:07:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:31 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:31 compute-0 podman[87937]: 2026-02-02 15:07:31.048244469 +0000 UTC m=+0.049606984 container create 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:31 compute-0 podman[87937]: 2026-02-02 15:07:31.024679947 +0000 UTC m=+0.026042492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:31 compute-0 podman[87937]: 2026-02-02 15:07:31.142693595 +0000 UTC m=+0.144056120 container init 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:31 compute-0 podman[87937]: 2026-02-02 15:07:31.154312232 +0000 UTC m=+0.155674737 container start 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:07:31 compute-0 podman[87937]: 2026-02-02 15:07:31.158753904 +0000 UTC m=+0.160116409 container attach 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:31 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 bash[87937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 bash[87937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 lvm[88033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:31 compute-0 lvm[88033]: VG ceph_vg0 finished
Feb 02 15:07:31 compute-0 lvm[88036]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:31 compute-0 lvm[88036]: VG ceph_vg1 finished
Feb 02 15:07:31 compute-0 lvm[88038]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:31 compute-0 lvm[88038]: VG ceph_vg2 finished
Feb 02 15:07:31 compute-0 ceph-mon[75334]: pgmap v24: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 02 15:07:31 compute-0 ceph-mon[75334]: osdmap e10: 3 total, 1 up, 3 in
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 02 15:07:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:31 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:31 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 bash[87937]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 02 15:07:31 compute-0 bash[87937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:31 compute-0 bash[87937]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb 02 15:07:32 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1738766481; not ready for session (expect reconnect)
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:32 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:32 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:32 compute-0 bash[87937]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 02 15:07:32 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate[87950]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 02 15:07:32 compute-0 bash[87937]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 02 15:07:32 compute-0 systemd[1]: libpod-444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4.scope: Deactivated successfully.
Feb 02 15:07:32 compute-0 systemd[1]: libpod-444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4.scope: Consumed 1.233s CPU time.
Feb 02 15:07:32 compute-0 podman[87937]: 2026-02-02 15:07:32.193645605 +0000 UTC m=+1.195008130 container died 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-75769712ce3ef8b2349238432ea90053dffac19fcf537b5deff427f730e26f5c-merged.mount: Deactivated successfully.
Feb 02 15:07:32 compute-0 podman[87937]: 2026-02-02 15:07:32.279362649 +0000 UTC m=+1.280725174 container remove 444f516e814e6bb20d9db885a9fea75267ec90f5bbe4a53123bf2d7ba725ece4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2-activate, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:07:32 compute-0 podman[88208]: 2026-02-02 15:07:32.446486098 +0000 UTC m=+0.046787648 container create f9df3643bc3a12b53c491c3361d8a3c3a266ffdf96606623ba4454f4b59eaebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9364ca6cc5648709065bff976f7139400dc4fda157a5c357e80ae8b323d9751f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9364ca6cc5648709065bff976f7139400dc4fda157a5c357e80ae8b323d9751f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9364ca6cc5648709065bff976f7139400dc4fda157a5c357e80ae8b323d9751f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9364ca6cc5648709065bff976f7139400dc4fda157a5c357e80ae8b323d9751f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9364ca6cc5648709065bff976f7139400dc4fda157a5c357e80ae8b323d9751f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:32 compute-0 podman[88208]: 2026-02-02 15:07:32.416805195 +0000 UTC m=+0.017106755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:32 compute-0 podman[88208]: 2026-02-02 15:07:32.530976654 +0000 UTC m=+0.131278204 container init f9df3643bc3a12b53c491c3361d8a3c3a266ffdf96606623ba4454f4b59eaebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:07:32 compute-0 podman[88208]: 2026-02-02 15:07:32.538846726 +0000 UTC m=+0.139148256 container start f9df3643bc3a12b53c491c3361d8a3c3a266ffdf96606623ba4454f4b59eaebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:07:32 compute-0 bash[88208]: f9df3643bc3a12b53c491c3361d8a3c3a266ffdf96606623ba4454f4b59eaebd
Feb 02 15:07:32 compute-0 systemd[1]: Started Ceph osd.2 for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:07:32 compute-0 ceph-osd[88227]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: pidfile_write: ignore empty --pid-file
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 sudo[87215]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 sudo[88241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:32 compute-0 sudo[88241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:32 compute-0 sudo[88241]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16400 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef16000 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 sudo[88272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:07:32 compute-0 sudo[88272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb 02 15:07:32 compute-0 ceph-osd[88227]: load: jerasure load: lrc 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 02 15:07:32 compute-0 ceph-osd[88227]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcef17c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs mount shared_bdev_used = 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Git sha 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: DB SUMMARY
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: DB Session ID:  J9BL3NQF62UXN1XUSI98
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                                     Options.env: 0x559dceda7ea0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                                Options.info_log: 0x559dcfe028a0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.write_buffer_manager: 0x559dcfca8b40
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Compression algorithms supported:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe02c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3d0f2051-4b69-4e1c-953d-a03c22afd332
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044852982031, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044852982963, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: freelist init
Feb 02 15:07:32 compute-0 ceph-osd[88227]: freelist _read_cfg
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 02 15:07:32 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bluefs umount
Feb 02 15:07:32 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) close
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bdev(0x559dcfbb7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluefs mount shared_bdev_used = 27262976
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: RocksDB version: 7.9.2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Git sha 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: DB SUMMARY
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: DB Session ID:  J9BL3NQF62UXN1XUSI99
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: CURRENT file:  CURRENT
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: IDENTITY file:  IDENTITY
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                         Options.error_if_exists: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.create_if_missing: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                         Options.paranoid_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                                     Options.env: 0x559dceda7d50
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                                Options.info_log: 0x559dcfe03b00
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_file_opening_threads: 16
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                              Options.statistics: (nil)
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.use_fsync: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.max_log_file_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                         Options.allow_fallocate: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.use_direct_reads: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.create_missing_column_families: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                              Options.db_log_dir: 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                                 Options.wal_dir: db.wal
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.advise_random_on_open: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.write_buffer_manager: 0x559dcfca9900
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                            Options.rate_limiter: (nil)
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.unordered_write: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.row_cache: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                              Options.wal_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.allow_ingest_behind: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.two_write_queues: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.manual_wal_flush: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.wal_compression: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.atomic_flush: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.log_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.allow_data_in_errors: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.db_host_id: __hostname__
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.max_background_jobs: 4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.max_background_compactions: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.max_subcompactions: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.max_open_files: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.bytes_per_sync: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.max_background_flushes: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Compression algorithms supported:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kZSTD supported: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kXpressCompression supported: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kBZip2Compression supported: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kLZ4Compression supported: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kZlibCompression supported: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kLZ4HCCompression supported: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         kSnappyCompression supported: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1738766481; not ready for session (expect reconnect)
Feb 02 15:07:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedaba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-mon[75334]: purged_snaps scrub starts
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-mon[75334]: purged_snaps scrub ok
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-mon[75334]: osdmap e11: 3 total, 1 up, 3 in
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-mon[75334]: pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:           Options.merge_operator: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.compaction_filter_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.sst_partitioner_factory: None
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559dcfe80300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559dcedab4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.write_buffer_size: 16777216
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.max_write_buffer_number: 64
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.compression: LZ4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.num_levels: 7
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.level: 32767
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.compression_opts.strategy: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                  Options.compression_opts.enabled: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.arena_block_size: 1048576
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.disable_auto_compactions: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.inplace_update_support: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.bloom_locality: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                    Options.max_successive_merges: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.paranoid_file_checks: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.force_consistency_checks: 1
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.report_bg_io_stats: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                               Options.ttl: 2592000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                       Options.enable_blob_files: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                           Options.min_blob_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                          Options.blob_file_size: 268435456
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb:                Options.blob_file_starting_level: 0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3d0f2051-4b69-4e1c-953d-a03c22afd332
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044853031801, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 43.588 iops: 11158.557 elapsed_sec: 0.269
Feb 02 15:07:33 compute-0 ceph-osd[87170]: log_channel(cluster) log [WRN] : OSD bench result of 11158.557172 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 0 waiting for initial osdmap
Feb 02 15:07:33 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1[87166]: 2026-02-02T15:07:33.032+0000 7f91fa20a640 -1 osd.1 0 waiting for initial osdmap
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044853035139, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044853, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3d0f2051-4b69-4e1c-953d-a03c22afd332", "db_session_id": "J9BL3NQF62UXN1XUSI99", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044853048970, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044853, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3d0f2051-4b69-4e1c-953d-a03c22afd332", "db_session_id": "J9BL3NQF62UXN1XUSI99", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044853053894, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044853, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3d0f2051-4b69-4e1c-953d-a03c22afd332", "db_session_id": "J9BL3NQF62UXN1XUSI99", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770044853055123, "job": 1, "event": "recovery_finished"}
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.060952353 +0000 UTC m=+0.047858833 container create 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:33 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-1[87166]: 2026-02-02T15:07:33.060+0000 7f91f47fd640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 set_numa_affinity not setting numa affinity
Feb 02 15:07:33 compute-0 ceph-osd[87170]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559dcfe05c00
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: DB pointer 0x559dcffbc000
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb 02 15:07:33 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb 02 15:07:33 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 02 15:07:33 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 02 15:07:33 compute-0 ceph-osd[88227]: _get_class not permitted to load lua
Feb 02 15:07:33 compute-0 ceph-osd[88227]: _get_class not permitted to load sdk
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 02 15:07:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 2 writes, 3 keys, 2 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 1 writes, 0 syncs, 1.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2 writes, 3 keys, 2 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 1 writes, 0 syncs, 1.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000572 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 load_pgs
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 load_pgs opened 0 pgs
Feb 02 15:07:33 compute-0 ceph-osd[88227]: osd.2 0 log_to_monitors true
Feb 02 15:07:33 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2[88223]: 2026-02-02T15:07:33.084+0000 7f3b20af08c0 -1 osd.2 0 log_to_monitors true
Feb 02 15:07:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb 02 15:07:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 02 15:07:33 compute-0 systemd[1]: Started libpod-conmon-9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5.scope.
Feb 02 15:07:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.043177273 +0000 UTC m=+0.030083773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.139737868 +0000 UTC m=+0.126644368 container init 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.1458883 +0000 UTC m=+0.132794780 container start 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.148898569 +0000 UTC m=+0.135805049 container attach 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:07:33 compute-0 compassionate_goldstine[88753]: 167 167
Feb 02 15:07:33 compute-0 systemd[1]: libpod-9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5.scope: Deactivated successfully.
Feb 02 15:07:33 compute-0 conmon[88753]: conmon 9751a459066f6350b063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5.scope/container/memory.events
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.152695216 +0000 UTC m=+0.139601696 container died 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1322067f25887cf6ccb8597ae98142ea1d277847e422ea0cabb1c6572448b78b-merged.mount: Deactivated successfully.
Feb 02 15:07:33 compute-0 podman[88522]: 2026-02-02 15:07:33.182472323 +0000 UTC m=+0.169378803 container remove 9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:07:33 compute-0 systemd[1]: libpod-conmon-9751a459066f6350b063ba26274445e0c95adb8910b7032a3d73b749605a6bc5.scope: Deactivated successfully.
Feb 02 15:07:33 compute-0 podman[88777]: 2026-02-02 15:07:33.291234197 +0000 UTC m=+0.043621655 container create ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:33 compute-0 systemd[1]: Started libpod-conmon-ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79.scope.
Feb 02 15:07:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96dfbe37948de8c226669a6c1c33a1cc9e30e73d24e58e214bedc5ad1cafe1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96dfbe37948de8c226669a6c1c33a1cc9e30e73d24e58e214bedc5ad1cafe1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96dfbe37948de8c226669a6c1c33a1cc9e30e73d24e58e214bedc5ad1cafe1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96dfbe37948de8c226669a6c1c33a1cc9e30e73d24e58e214bedc5ad1cafe1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:33 compute-0 podman[88777]: 2026-02-02 15:07:33.352062599 +0000 UTC m=+0.104450077 container init ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:07:33 compute-0 podman[88777]: 2026-02-02 15:07:33.357135756 +0000 UTC m=+0.109523214 container start ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:07:33 compute-0 podman[88777]: 2026-02-02 15:07:33.360188716 +0000 UTC m=+0.112576274 container attach ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:33 compute-0 podman[88777]: 2026-02-02 15:07:33.267797388 +0000 UTC m=+0.020184896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:33 compute-0 lvm[88868]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:33 compute-0 lvm[88868]: VG ceph_vg0 finished
Feb 02 15:07:33 compute-0 lvm[88870]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:33 compute-0 lvm[88870]: VG ceph_vg1 finished
Feb 02 15:07:33 compute-0 lvm[88871]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:33 compute-0 lvm[88871]: VG ceph_vg2 finished
Feb 02 15:07:34 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1738766481; not ready for session (expect reconnect)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:34 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481] boot
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Feb 02 15:07:34 compute-0 ceph-osd[87170]: osd.1 12 state: booting -> active
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 02 15:07:34 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:34 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:34 compute-0 ceph-mon[75334]: OSD bench result of 11158.557172 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:34 compute-0 ceph-mon[75334]: from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 02 15:07:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:34 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 02 15:07:34 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 02 15:07:34 compute-0 zealous_faraday[88793]: {}
Feb 02 15:07:34 compute-0 systemd[1]: libpod-ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79.scope: Deactivated successfully.
Feb 02 15:07:34 compute-0 podman[88777]: 2026-02-02 15:07:34.115409693 +0000 UTC m=+0.867797161 container died ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96dfbe37948de8c226669a6c1c33a1cc9e30e73d24e58e214bedc5ad1cafe1a-merged.mount: Deactivated successfully.
Feb 02 15:07:34 compute-0 podman[88777]: 2026-02-02 15:07:34.158646939 +0000 UTC m=+0.911034407 container remove ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:07:34 compute-0 systemd[1]: libpod-conmon-ad2c502327ad34fc2053715d65548ff0526dc7d80b70c30cf507787bcf1c0c79.scope: Deactivated successfully.
Feb 02 15:07:34 compute-0 sudo[88272]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:34 compute-0 sudo[88885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:34 compute-0 sudo[88885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:34 compute-0 sudo[88885]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:34 compute-0 sudo[88910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:34 compute-0 sudo[88910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:34 compute-0 sudo[88910]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:34 compute-0 sudo[88935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:07:34 compute-0 sudo[88935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:34 compute-0 podman[89004]: 2026-02-02 15:07:34.714831641 +0000 UTC m=+0.041808504 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:07:34 compute-0 podman[89004]: 2026-02-02 15:07:34.789855799 +0000 UTC m=+0.116832652 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 done with init, starting boot process
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 start_boot
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 02 15:07:35 compute-0 ceph-osd[88227]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2220542249; not ready for session (expect reconnect)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:35 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 02 15:07:35 compute-0 ceph-mon[75334]: osd.1 [v2:192.168.122.100:6806/1738766481,v1:192.168.122.100:6807/1738766481] boot
Feb 02 15:07:35 compute-0 ceph-mon[75334]: osdmap e12: 3 total, 2 up, 3 in
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:35 compute-0 ceph-mon[75334]: pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 02 15:07:35 compute-0 ceph-mon[75334]: osdmap e13: 3 total, 2 up, 3 in
Feb 02 15:07:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] creating main.db for devicehealth
Feb 02 15:07:35 compute-0 sudo[88935]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:07:35 compute-0 ceph-mgr[75628]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 02 15:07:35 compute-0 sudo[89166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:35 compute-0 sudo[89166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:35 compute-0 sudo[89166]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:35 compute-0 sudo[89190]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Feb 02 15:07:35 compute-0 sudo[89190]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 02 15:07:35 compute-0 sudo[89190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Feb 02 15:07:35 compute-0 sudo[89190]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:07:35 compute-0 sudo[89194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- inventory --format=json-pretty --filter-for-batch
Feb 02 15:07:35 compute-0 sudo[89194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.581446184 +0000 UTC m=+0.053619536 container create 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:07:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:35 compute-0 systemd[1]: Started libpod-conmon-6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d.scope.
Feb 02 15:07:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.561480734 +0000 UTC m=+0.033654116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.660351832 +0000 UTC m=+0.132525214 container init 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.669625065 +0000 UTC m=+0.141798427 container start 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:35 compute-0 dazzling_blackwell[89246]: 167 167
Feb 02 15:07:35 compute-0 systemd[1]: libpod-6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d.scope: Deactivated successfully.
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.674773854 +0000 UTC m=+0.146947236 container attach 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.675092601 +0000 UTC m=+0.147265953 container died 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-50c30f24b1273fbf49670612fd28b099bf7be85e874c3cf336e7045d210262a3-merged.mount: Deactivated successfully.
Feb 02 15:07:35 compute-0 podman[89230]: 2026-02-02 15:07:35.748236236 +0000 UTC m=+0.220409598 container remove 6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:35 compute-0 systemd[1]: libpod-conmon-6d9070532c1b7f45888b1cac8fddd014f1d09b4ad74724eaf0b9bdd14750454d.scope: Deactivated successfully.
Feb 02 15:07:35 compute-0 podman[89271]: 2026-02-02 15:07:35.885365615 +0000 UTC m=+0.054360464 container create a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:35 compute-0 systemd[1]: Started libpod-conmon-a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab.scope.
Feb 02 15:07:35 compute-0 podman[89271]: 2026-02-02 15:07:35.85302423 +0000 UTC m=+0.022019129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8840ddd922cf5e796b4b650ba5195910bf0654989f2663395e0412016f5e6500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8840ddd922cf5e796b4b650ba5195910bf0654989f2663395e0412016f5e6500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8840ddd922cf5e796b4b650ba5195910bf0654989f2663395e0412016f5e6500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8840ddd922cf5e796b4b650ba5195910bf0654989f2663395e0412016f5e6500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:36 compute-0 podman[89271]: 2026-02-02 15:07:36.006139666 +0000 UTC m=+0.175134495 container init a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:07:36 compute-0 podman[89271]: 2026-02-02 15:07:36.012040043 +0000 UTC m=+0.181034852 container start a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:36 compute-0 podman[89271]: 2026-02-02 15:07:36.021589343 +0000 UTC m=+0.190584182 container attach a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2220542249; not ready for session (expect reconnect)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:36 compute-0 crazy_shirley[89287]: [
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:     {
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "available": false,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "being_replaced": false,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "ceph_device_lvm": false,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "lsm_data": {},
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "lvs": [],
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "path": "/dev/sr0",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "rejected_reasons": [
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "Has a FileSystem",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "Insufficient space (<5GB)"
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         ],
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         "sys_api": {
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "actuators": null,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "device_nodes": [
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:                 "sr0"
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             ],
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "devname": "sr0",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "human_readable_size": "482.00 KB",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "id_bus": "ata",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "model": "QEMU DVD-ROM",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "nr_requests": "2",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "parent": "/dev/sr0",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "partitions": {},
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "path": "/dev/sr0",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "removable": "1",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "rev": "2.5+",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "ro": "0",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "rotational": "1",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "sas_address": "",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "sas_device_handle": "",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "scheduler_mode": "mq-deadline",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "sectors": 0,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "sectorsize": "2048",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "size": 493568.0,
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "support_discard": "2048",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "type": "disk",
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:             "vendor": "QEMU"
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:         }
Feb 02 15:07:36 compute-0 crazy_shirley[89287]:     }
Feb 02 15:07:36 compute-0 crazy_shirley[89287]: ]
Feb 02 15:07:36 compute-0 systemd[1]: libpod-a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab.scope: Deactivated successfully.
Feb 02 15:07:36 compute-0 podman[90021]: 2026-02-02 15:07:36.587063108 +0000 UTC m=+0.026002129 container died a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:07:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8840ddd922cf5e796b4b650ba5195910bf0654989f2663395e0412016f5e6500-merged.mount: Deactivated successfully.
Feb 02 15:07:36 compute-0 podman[90021]: 2026-02-02 15:07:36.698871414 +0000 UTC m=+0.137810435 container remove a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:07:36 compute-0 systemd[1]: libpod-conmon-a9730ee3162d129a48a6ff3014721e1ec9b4c987729dc1faa13600626685abab.scope: Deactivated successfully.
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 15:07:36 compute-0 sudo[89194]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 02 15:07:36 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:07:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:07:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:36 compute-0 sudo[90036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:36 compute-0 sudo[90036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:36 compute-0 sudo[90036]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:36 compute-0 sudo[90061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:07:36 compute-0 sudo[90061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.191 iops: 8752.829 elapsed_sec: 0.343
Feb 02 15:07:37 compute-0 ceph-osd[88227]: log_channel(cluster) log [WRN] : OSD bench result of 8752.829078 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:37 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2[88223]: 2026-02-02T15:07:37.035+0000 7f3b1ca72640 -1 osd.2 0 waiting for initial osdmap
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 0 waiting for initial osdmap
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Feb 02 15:07:37 compute-0 ceph-mgr[75628]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2220542249; not ready for session (expect reconnect)
Feb 02 15:07:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:37 compute-0 ceph-mgr[75628]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 02 15:07:37 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-osd-2[88223]: 2026-02-02T15:07:37.058+0000 7f3b17877640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 set_numa_affinity not setting numa affinity
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb 02 15:07:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rxryxi(active, since 54s)
Feb 02 15:07:37 compute-0 ceph-mon[75334]: purged_snaps scrub starts
Feb 02 15:07:37 compute-0 ceph-mon[75334]: purged_snaps scrub ok
Feb 02 15:07:37 compute-0 ceph-mon[75334]: osdmap e14: 3 total, 2 up, 3 in
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: pgmap v32: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: Adjusting osd_memory_target on compute-0 to 43686k
Feb 02 15:07:37 compute-0 ceph-mon[75334]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:07:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.278110427 +0000 UTC m=+0.061846405 container create f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:07:37 compute-0 systemd[1]: Started libpod-conmon-f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412.scope.
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.25042503 +0000 UTC m=+0.034161038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.367157209 +0000 UTC m=+0.150893197 container init f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.377257582 +0000 UTC m=+0.160993540 container start f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.380994487 +0000 UTC m=+0.164730455 container attach f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:07:37 compute-0 flamboyant_antonelli[90114]: 167 167
Feb 02 15:07:37 compute-0 systemd[1]: libpod-f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412.scope: Deactivated successfully.
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.382212345 +0000 UTC m=+0.165948303 container died f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ee763c4bd278197322b8b1c8e8c23b9db174aa3ba813f56ed3867318ed587c2-merged.mount: Deactivated successfully.
Feb 02 15:07:37 compute-0 podman[90098]: 2026-02-02 15:07:37.424469928 +0000 UTC m=+0.208205866 container remove f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:07:37 compute-0 systemd[1]: libpod-conmon-f09289770f2d1211d1a9d5dc8d0e0fcd857012af074cc2eec5dcb50850f5c412.scope: Deactivated successfully.
Feb 02 15:07:37 compute-0 podman[90138]: 2026-02-02 15:07:37.564858793 +0000 UTC m=+0.049659615 container create 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:07:37 compute-0 systemd[1]: Started libpod-conmon-74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4.scope.
Feb 02 15:07:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:37 compute-0 podman[90138]: 2026-02-02 15:07:37.636932403 +0000 UTC m=+0.121733255 container init 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:07:37 compute-0 podman[90138]: 2026-02-02 15:07:37.548831563 +0000 UTC m=+0.033632405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:37 compute-0 podman[90138]: 2026-02-02 15:07:37.647298102 +0000 UTC m=+0.132098934 container start 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:07:37 compute-0 podman[90138]: 2026-02-02 15:07:37.650943126 +0000 UTC m=+0.135744048 container attach 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb 02 15:07:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Feb 02 15:07:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249] boot
Feb 02 15:07:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Feb 02 15:07:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 02 15:07:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:37 compute-0 ceph-osd[88227]: osd.2 15 state: booting -> active
Feb 02 15:07:38 compute-0 ceph-mon[75334]: OSD bench result of 8752.829078 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 02 15:07:38 compute-0 ceph-mon[75334]: mgrmap e9: compute-0.rxryxi(active, since 54s)
Feb 02 15:07:38 compute-0 ceph-mon[75334]: osd.2 [v2:192.168.122.100:6810/2220542249,v1:192.168.122.100:6811/2220542249] boot
Feb 02 15:07:38 compute-0 ceph-mon[75334]: osdmap e15: 3 total, 3 up, 3 in
Feb 02 15:07:38 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 02 15:07:38 compute-0 nice_wilbur[90154]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:07:38 compute-0 nice_wilbur[90154]: --> All data devices are unavailable
Feb 02 15:07:38 compute-0 systemd[1]: libpod-74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4.scope: Deactivated successfully.
Feb 02 15:07:38 compute-0 podman[90138]: 2026-02-02 15:07:38.151807564 +0000 UTC m=+0.636608416 container died 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7087735326ec395b1f861f950daa3db7d2e57cdebb95f3c9cd9831e9f54d69-merged.mount: Deactivated successfully.
Feb 02 15:07:38 compute-0 podman[90138]: 2026-02-02 15:07:38.20503151 +0000 UTC m=+0.689832362 container remove 74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:07:38 compute-0 systemd[1]: libpod-conmon-74421a7b369dcd84400f9d5a41affd2180f9dac65586bb94ff0d16e7be450aa4.scope: Deactivated successfully.
Feb 02 15:07:38 compute-0 sudo[90061]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:38 compute-0 sudo[90186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:38 compute-0 sudo[90186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:38 compute-0 sudo[90186]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:38 compute-0 sudo[90211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:07:38 compute-0 sudo[90211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.66389023 +0000 UTC m=+0.046899172 container create 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:07:38 compute-0 systemd[1]: Started libpod-conmon-40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061.scope.
Feb 02 15:07:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.644918482 +0000 UTC m=+0.027927464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.743520924 +0000 UTC m=+0.126529866 container init 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.750042274 +0000 UTC m=+0.133051226 container start 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:38 compute-0 strange_goldberg[90265]: 167 167
Feb 02 15:07:38 compute-0 systemd[1]: libpod-40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061.scope: Deactivated successfully.
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.75421571 +0000 UTC m=+0.137224682 container attach 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.755187793 +0000 UTC m=+0.138196735 container died 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c37c777603abcbab0e4dfac0f619f253e17287dbe5767005a05da6da4ee1cb8e-merged.mount: Deactivated successfully.
Feb 02 15:07:38 compute-0 podman[90250]: 2026-02-02 15:07:38.790316672 +0000 UTC m=+0.173325614 container remove 40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:38 compute-0 systemd[1]: libpod-conmon-40f1298a798db8555898fb48ba5249cb90e39337d2a8dbdddd495982567f3061.scope: Deactivated successfully.
Feb 02 15:07:38 compute-0 podman[90288]: 2026-02-02 15:07:38.921535665 +0000 UTC m=+0.042298746 container create cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:38 compute-0 systemd[1]: Started libpod-conmon-cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27.scope.
Feb 02 15:07:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bec7f0488602745daf36215efbe2a3028b65a3e565d12d854c98b0ecd5d45f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bec7f0488602745daf36215efbe2a3028b65a3e565d12d854c98b0ecd5d45f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bec7f0488602745daf36215efbe2a3028b65a3e565d12d854c98b0ecd5d45f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bec7f0488602745daf36215efbe2a3028b65a3e565d12d854c98b0ecd5d45f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:38 compute-0 podman[90288]: 2026-02-02 15:07:38.904947703 +0000 UTC m=+0.025710814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:39 compute-0 podman[90288]: 2026-02-02 15:07:39.017492995 +0000 UTC m=+0.138256166 container init cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:07:39 compute-0 podman[90288]: 2026-02-02 15:07:39.023558925 +0000 UTC m=+0.144322006 container start cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:07:39 compute-0 podman[90288]: 2026-02-02 15:07:39.02728877 +0000 UTC m=+0.148051891 container attach cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb 02 15:07:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb 02 15:07:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb 02 15:07:39 compute-0 ceph-mon[75334]: pgmap v34: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:39 compute-0 exciting_moser[90304]: {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     "0": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "devices": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "/dev/loop3"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             ],
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_name": "ceph_lv0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_size": "21470642176",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "name": "ceph_lv0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "tags": {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.crush_device_class": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.encrypted": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_id": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.vdo": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.with_tpm": "0"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             },
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "vg_name": "ceph_vg0"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         }
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     ],
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     "1": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "devices": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "/dev/loop4"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             ],
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_name": "ceph_lv1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_size": "21470642176",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "name": "ceph_lv1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "tags": {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.crush_device_class": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.encrypted": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_id": "1",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.vdo": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.with_tpm": "0"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             },
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "vg_name": "ceph_vg1"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         }
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     ],
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     "2": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "devices": [
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "/dev/loop5"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             ],
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_name": "ceph_lv2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_size": "21470642176",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "name": "ceph_lv2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "tags": {
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.cluster_name": "ceph",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.crush_device_class": "",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.encrypted": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.objectstore": "bluestore",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osd_id": "2",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.vdo": "0",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:                 "ceph.with_tpm": "0"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             },
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "type": "block",
Feb 02 15:07:39 compute-0 exciting_moser[90304]:             "vg_name": "ceph_vg2"
Feb 02 15:07:39 compute-0 exciting_moser[90304]:         }
Feb 02 15:07:39 compute-0 exciting_moser[90304]:     ]
Feb 02 15:07:39 compute-0 exciting_moser[90304]: }
Feb 02 15:07:39 compute-0 systemd[1]: libpod-cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27.scope: Deactivated successfully.
Feb 02 15:07:39 compute-0 podman[90288]: 2026-02-02 15:07:39.314135278 +0000 UTC m=+0.434898359 container died cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-22bec7f0488602745daf36215efbe2a3028b65a3e565d12d854c98b0ecd5d45f-merged.mount: Deactivated successfully.
Feb 02 15:07:39 compute-0 podman[90288]: 2026-02-02 15:07:39.365047841 +0000 UTC m=+0.485810962 container remove cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:07:39 compute-0 systemd[1]: libpod-conmon-cec22ac25eee6788fcbe8865d1c02decc8c8d90ebf5e3ef2e5b348a74a2f5f27.scope: Deactivated successfully.
Feb 02 15:07:39 compute-0 sudo[90211]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:39 compute-0 sudo[90327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:39 compute-0 sudo[90327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:39 compute-0 sudo[90327]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:39 compute-0 sudo[90352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:07:39 compute-0 sudo[90352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:39 compute-0 podman[90389]: 2026-02-02 15:07:39.826123662 +0000 UTC m=+0.048444427 container create d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:39 compute-0 systemd[1]: Started libpod-conmon-d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620.scope.
Feb 02 15:07:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:39 compute-0 podman[90389]: 2026-02-02 15:07:39.806852398 +0000 UTC m=+0.029173153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:40 compute-0 podman[90389]: 2026-02-02 15:07:40.02399125 +0000 UTC m=+0.246312065 container init d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:40 compute-0 podman[90389]: 2026-02-02 15:07:40.03266235 +0000 UTC m=+0.254983115 container start d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:40 compute-0 gracious_ishizaka[90405]: 167 167
Feb 02 15:07:40 compute-0 systemd[1]: libpod-d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620.scope: Deactivated successfully.
Feb 02 15:07:40 compute-0 podman[90389]: 2026-02-02 15:07:40.057498632 +0000 UTC m=+0.279819387 container attach d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:40 compute-0 podman[90389]: 2026-02-02 15:07:40.059244242 +0000 UTC m=+0.281565007 container died d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fb3309919e0d7d60ca348f7bfc696dc5a8d109ceb171f8312a4b646b23288e-merged.mount: Deactivated successfully.
Feb 02 15:07:40 compute-0 podman[90389]: 2026-02-02 15:07:40.09823309 +0000 UTC m=+0.320553855 container remove d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ishizaka, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:07:40 compute-0 systemd[1]: libpod-conmon-d74f4141627d6969c72e70fa82e8508c033f1b9d1112aefbe5c742aec9a52620.scope: Deactivated successfully.
Feb 02 15:07:40 compute-0 ceph-mon[75334]: osdmap e16: 3 total, 3 up, 3 in
Feb 02 15:07:40 compute-0 podman[90431]: 2026-02-02 15:07:40.228360308 +0000 UTC m=+0.040239998 container create 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:07:40 compute-0 systemd[1]: Started libpod-conmon-5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2.scope.
Feb 02 15:07:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b804c9bdd0e108625943cc358b47f0087221e309bd22b3a5987bd74d9c2c839a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b804c9bdd0e108625943cc358b47f0087221e309bd22b3a5987bd74d9c2c839a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b804c9bdd0e108625943cc358b47f0087221e309bd22b3a5987bd74d9c2c839a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b804c9bdd0e108625943cc358b47f0087221e309bd22b3a5987bd74d9c2c839a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:40 compute-0 podman[90431]: 2026-02-02 15:07:40.206641818 +0000 UTC m=+0.018521488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:07:40 compute-0 podman[90431]: 2026-02-02 15:07:40.322775612 +0000 UTC m=+0.134655292 container init 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:40 compute-0 podman[90431]: 2026-02-02 15:07:40.32830826 +0000 UTC m=+0.140187910 container start 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:07:40 compute-0 podman[90431]: 2026-02-02 15:07:40.331814331 +0000 UTC m=+0.143694001 container attach 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:40 compute-0 lvm[90526]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:07:40 compute-0 lvm[90526]: VG ceph_vg0 finished
Feb 02 15:07:40 compute-0 lvm[90527]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:07:40 compute-0 lvm[90527]: VG ceph_vg1 finished
Feb 02 15:07:41 compute-0 lvm[90529]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:07:41 compute-0 lvm[90529]: VG ceph_vg2 finished
Feb 02 15:07:41 compute-0 ceph-mon[75334]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:41 compute-0 charming_lamarr[90448]: {}
Feb 02 15:07:41 compute-0 systemd[1]: libpod-5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2.scope: Deactivated successfully.
Feb 02 15:07:41 compute-0 systemd[1]: libpod-5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2.scope: Consumed 1.220s CPU time.
Feb 02 15:07:41 compute-0 podman[90431]: 2026-02-02 15:07:41.174863851 +0000 UTC m=+0.986743541 container died 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b804c9bdd0e108625943cc358b47f0087221e309bd22b3a5987bd74d9c2c839a-merged.mount: Deactivated successfully.
Feb 02 15:07:41 compute-0 podman[90431]: 2026-02-02 15:07:41.222679523 +0000 UTC m=+1.034559173 container remove 5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamarr, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:41 compute-0 systemd[1]: libpod-conmon-5313ddd92a909e3a26eb8f20b3c366d02335473f050cf92c7a827ba75d0753e2.scope: Deactivated successfully.
Feb 02 15:07:41 compute-0 sudo[90352]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:07:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:07:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:41 compute-0 sudo[90542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:07:41 compute-0 sudo[90542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:41 compute-0 sudo[90542]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:41 compute-0 sudo[90590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpofbnfzqkpdmdydyucjdbpwzedpoayp ; /usr/bin/python3'
Feb 02 15:07:41 compute-0 sudo[90590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:41 compute-0 python3[90592]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:41 compute-0 podman[90594]: 2026-02-02 15:07:41.632694248 +0000 UTC m=+0.045742866 container create 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:41 compute-0 systemd[1]: Started libpod-conmon-95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8.scope.
Feb 02 15:07:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cde15ec138a44d72905b00b7be47a3e95af6c3346cdba9110749b30ad860fe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cde15ec138a44d72905b00b7be47a3e95af6c3346cdba9110749b30ad860fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cde15ec138a44d72905b00b7be47a3e95af6c3346cdba9110749b30ad860fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:41 compute-0 podman[90594]: 2026-02-02 15:07:41.70446247 +0000 UTC m=+0.117511098 container init 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:07:41 compute-0 podman[90594]: 2026-02-02 15:07:41.611868508 +0000 UTC m=+0.024917196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:41 compute-0 podman[90594]: 2026-02-02 15:07:41.710054809 +0000 UTC m=+0.123103427 container start 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:07:41 compute-0 podman[90594]: 2026-02-02 15:07:41.715245239 +0000 UTC m=+0.128293897 container attach 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 02 15:07:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 15:07:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160596242' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:07:42 compute-0 gifted_hertz[90610]: 
Feb 02 15:07:42 compute-0 gifted_hertz[90610]: {"fsid":"e43470b2-6632-573a-87d3-0f5428ec59e9","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":76,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1770044857,"num_in_osds":3,"osd_in_since":1770044838,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":83468288,"bytes_avail":64328458240,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-02-02T15:06:23:601132+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T15:06:23.603344+0000","services":{}},"progress_events":{}}
Feb 02 15:07:42 compute-0 systemd[1]: libpod-95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8.scope: Deactivated successfully.
Feb 02 15:07:42 compute-0 podman[90594]: 2026-02-02 15:07:42.213002446 +0000 UTC m=+0.626051064 container died 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0cde15ec138a44d72905b00b7be47a3e95af6c3346cdba9110749b30ad860fe-merged.mount: Deactivated successfully.
Feb 02 15:07:42 compute-0 podman[90594]: 2026-02-02 15:07:42.255027514 +0000 UTC m=+0.668076132 container remove 95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8 (image=quay.io/ceph/ceph:v20, name=gifted_hertz, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:42 compute-0 systemd[1]: libpod-conmon-95714cf14521c2df0f0cc608ccb2a316941746dfdfcacb4ecdd0773f0b6756e8.scope: Deactivated successfully.
Feb 02 15:07:42 compute-0 sudo[90590]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4160596242' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:07:42 compute-0 sudo[90670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsnztfhvkbtxfuspoqxlgxghgrhthbw ; /usr/bin/python3'
Feb 02 15:07:42 compute-0 sudo[90670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:42 compute-0 python3[90672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:07:42
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.mgr']
Feb 02 15:07:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:07:42 compute-0 podman[90673]: 2026-02-02 15:07:42.766132207 +0000 UTC m=+0.040888783 container create 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:42 compute-0 systemd[1]: Started libpod-conmon-497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9.scope.
Feb 02 15:07:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3850a7b8918020822d942bedcd5c4411e90a079054680d47449b7957ef081d7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3850a7b8918020822d942bedcd5c4411e90a079054680d47449b7957ef081d7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:42 compute-0 podman[90673]: 2026-02-02 15:07:42.745645935 +0000 UTC m=+0.020402531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:42 compute-0 podman[90673]: 2026-02-02 15:07:42.871835072 +0000 UTC m=+0.146591678 container init 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:42 compute-0 podman[90673]: 2026-02-02 15:07:42.879245062 +0000 UTC m=+0.154001638 container start 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:07:42 compute-0 podman[90673]: 2026-02-02 15:07:42.882977809 +0000 UTC m=+0.157734415 container attach 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:07:43 compute-0 ceph-mon[75334]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:43 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2982988105' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb 02 15:07:44 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2982988105' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2982988105' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb 02 15:07:44 compute-0 busy_pare[90688]: pool 'vms' created
Feb 02 15:07:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb 02 15:07:44 compute-0 systemd[1]: libpod-497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9.scope: Deactivated successfully.
Feb 02 15:07:44 compute-0 podman[90673]: 2026-02-02 15:07:44.334786732 +0000 UTC m=+1.609543328 container died 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3850a7b8918020822d942bedcd5c4411e90a079054680d47449b7957ef081d7f-merged.mount: Deactivated successfully.
Feb 02 15:07:44 compute-0 podman[90673]: 2026-02-02 15:07:44.371482917 +0000 UTC m=+1.646239493 container remove 497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9 (image=quay.io/ceph/ceph:v20, name=busy_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:44 compute-0 systemd[1]: libpod-conmon-497b90da4d4c2fe81f02483edd08b785e4f532bc2f3f9a53ebf99715d105c8c9.scope: Deactivated successfully.
Feb 02 15:07:44 compute-0 sudo[90670]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:44 compute-0 sudo[90751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufvzqigtukmrhzauexibqnfghssztvl ; /usr/bin/python3'
Feb 02 15:07:44 compute-0 sudo[90751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:44 compute-0 python3[90753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:07:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:07:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:07:44 compute-0 podman[90754]: 2026-02-02 15:07:44.707560529 +0000 UTC m=+0.043374971 container create 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:44 compute-0 systemd[1]: Started libpod-conmon-78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64.scope.
Feb 02 15:07:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c49dad6e3a50d98817e791730e5716bee4060b3070d021e52dbf07b2c9ba6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c49dad6e3a50d98817e791730e5716bee4060b3070d021e52dbf07b2c9ba6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:44 compute-0 podman[90754]: 2026-02-02 15:07:44.688906079 +0000 UTC m=+0.024720521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:44 compute-0 podman[90754]: 2026-02-02 15:07:44.790576741 +0000 UTC m=+0.126391163 container init 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:07:44 compute-0 podman[90754]: 2026-02-02 15:07:44.79919342 +0000 UTC m=+0.135007822 container start 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:44 compute-0 podman[90754]: 2026-02-02 15:07:44.802468535 +0000 UTC m=+0.138282987 container attach 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:45 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835279944' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb 02 15:07:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:07:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835279944' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb 02 15:07:45 compute-0 agitated_lumiere[90769]: pool 'volumes' created
Feb 02 15:07:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2982988105' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:45 compute-0 ceph-mon[75334]: osdmap e17: 3 total, 3 up, 3 in
Feb 02 15:07:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:07:45 compute-0 ceph-mon[75334]: pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2835279944' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb 02 15:07:45 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:45 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev adc4cbb2-d0e4-4805-9e80-acae8f886aad (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 02 15:07:45 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev adc4cbb2-d0e4-4805-9e80-acae8f886aad (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 02 15:07:45 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event adc4cbb2-d0e4-4805-9e80-acae8f886aad (PG autoscaler increasing pool 2 PGs from 1 to 32) in 0 seconds
Feb 02 15:07:45 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:45 compute-0 systemd[1]: libpod-78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64.scope: Deactivated successfully.
Feb 02 15:07:45 compute-0 podman[90754]: 2026-02-02 15:07:45.354507061 +0000 UTC m=+0.690321483 container died 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c49dad6e3a50d98817e791730e5716bee4060b3070d021e52dbf07b2c9ba6e-merged.mount: Deactivated successfully.
Feb 02 15:07:45 compute-0 podman[90754]: 2026-02-02 15:07:45.387550093 +0000 UTC m=+0.723364515 container remove 78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64 (image=quay.io/ceph/ceph:v20, name=agitated_lumiere, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:07:45 compute-0 systemd[1]: libpod-conmon-78af40fe3abec9e6c4e4645b54b0380bffe18324fe327f8475b79aa5ed21cf64.scope: Deactivated successfully.
Feb 02 15:07:45 compute-0 sudo[90751]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:45 compute-0 sudo[90831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnbifmiznvzoanxjrqqsqgtrroblnzib ; /usr/bin/python3'
Feb 02 15:07:45 compute-0 sudo[90831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:45 compute-0 python3[90833]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:45 compute-0 podman[90834]: 2026-02-02 15:07:45.751147248 +0000 UTC m=+0.050139465 container create b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:07:45 compute-0 systemd[1]: Started libpod-conmon-b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d.scope.
Feb 02 15:07:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04750908923fd517309d8474b323ce16ce7891d7dd1b5a54bd3f96ec2d58cc06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04750908923fd517309d8474b323ce16ce7891d7dd1b5a54bd3f96ec2d58cc06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:45 compute-0 podman[90834]: 2026-02-02 15:07:45.729466779 +0000 UTC m=+0.028458976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:45 compute-0 podman[90834]: 2026-02-02 15:07:45.846904494 +0000 UTC m=+0.145896761 container init b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:45 compute-0 podman[90834]: 2026-02-02 15:07:45.85278455 +0000 UTC m=+0.151776767 container start b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:45 compute-0 podman[90834]: 2026-02-02 15:07:45.856670959 +0000 UTC m=+0.155663226 container attach b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:07:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4074794776' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb 02 15:07:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4074794776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb 02 15:07:46 compute-0 frosty_galileo[90849]: pool 'backups' created
Feb 02 15:07:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb 02 15:07:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:07:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2835279944' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:46 compute-0 ceph-mon[75334]: osdmap e18: 3 total, 3 up, 3 in
Feb 02 15:07:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4074794776' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:46 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:46 compute-0 systemd[1]: libpod-b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d.scope: Deactivated successfully.
Feb 02 15:07:46 compute-0 podman[90834]: 2026-02-02 15:07:46.355925629 +0000 UTC m=+0.654917836 container died b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-04750908923fd517309d8474b323ce16ce7891d7dd1b5a54bd3f96ec2d58cc06-merged.mount: Deactivated successfully.
Feb 02 15:07:46 compute-0 podman[90834]: 2026-02-02 15:07:46.399441552 +0000 UTC m=+0.698433729 container remove b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d (image=quay.io/ceph/ceph:v20, name=frosty_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:07:46 compute-0 systemd[1]: libpod-conmon-b68b082d665ff698920dd177f9c310854c6a9979deb76d7b75651de518caa39d.scope: Deactivated successfully.
Feb 02 15:07:46 compute-0 sudo[90831]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:46 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:46 compute-0 sudo[90911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwgqosmfzqfjxbsnwfpoouknzjkmevfm ; /usr/bin/python3'
Feb 02 15:07:46 compute-0 sudo[90911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:46 compute-0 python3[90913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:46 compute-0 podman[90914]: 2026-02-02 15:07:46.758416451 +0000 UTC m=+0.037972985 container create 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:07:46 compute-0 systemd[1]: Started libpod-conmon-8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33.scope.
Feb 02 15:07:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625e9db2e6489f51ba5e0e0dc8f1ee6c7369fe223cb0ed5ce8082f9980b69737/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625e9db2e6489f51ba5e0e0dc8f1ee6c7369fe223cb0ed5ce8082f9980b69737/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:46 compute-0 podman[90914]: 2026-02-02 15:07:46.742830212 +0000 UTC m=+0.022386786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:46 compute-0 podman[90914]: 2026-02-02 15:07:46.844296479 +0000 UTC m=+0.123853043 container init 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:46 compute-0 podman[90914]: 2026-02-02 15:07:46.852502718 +0000 UTC m=+0.132059292 container start 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:07:46 compute-0 podman[90914]: 2026-02-02 15:07:46.856892219 +0000 UTC m=+0.136448783 container attach 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:07:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024279554' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb 02 15:07:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024279554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb 02 15:07:47 compute-0 interesting_euler[90929]: pool 'images' created
Feb 02 15:07:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb 02 15:07:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4074794776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:47 compute-0 ceph-mon[75334]: osdmap e19: 3 total, 3 up, 3 in
Feb 02 15:07:47 compute-0 ceph-mon[75334]: pgmap v42: 4 pgs: 1 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1024279554' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:47 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:47 compute-0 systemd[1]: libpod-8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33.scope: Deactivated successfully.
Feb 02 15:07:47 compute-0 podman[90914]: 2026-02-02 15:07:47.369527958 +0000 UTC m=+0.649084502 container died 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-625e9db2e6489f51ba5e0e0dc8f1ee6c7369fe223cb0ed5ce8082f9980b69737-merged.mount: Deactivated successfully.
Feb 02 15:07:47 compute-0 podman[90914]: 2026-02-02 15:07:47.405996488 +0000 UTC m=+0.685553032 container remove 8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33 (image=quay.io/ceph/ceph:v20, name=interesting_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:47 compute-0 systemd[1]: libpod-conmon-8ef52c316f921c791b35c841a6c1509e3abc6f137e57bd94de8469eb743b7c33.scope: Deactivated successfully.
Feb 02 15:07:47 compute-0 sudo[90911]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:47 compute-0 sudo[90990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crseclazguaypuncxfnauoyarrvkkbpf ; /usr/bin/python3'
Feb 02 15:07:47 compute-0 sudo[90990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:47 compute-0 python3[90992]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:47 compute-0 podman[90993]: 2026-02-02 15:07:47.735116519 +0000 UTC m=+0.042622162 container create 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:47 compute-0 systemd[1]: Started libpod-conmon-667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5.scope.
Feb 02 15:07:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da8ec4bb2c53cf70ce886203ad4c77fa3c4028d2e8bc969a4ca9f370f75299/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da8ec4bb2c53cf70ce886203ad4c77fa3c4028d2e8bc969a4ca9f370f75299/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:47 compute-0 podman[90993]: 2026-02-02 15:07:47.716601663 +0000 UTC m=+0.024107316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:47 compute-0 podman[90993]: 2026-02-02 15:07:47.824056578 +0000 UTC m=+0.131562291 container init 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:47 compute-0 podman[90993]: 2026-02-02 15:07:47.831152572 +0000 UTC m=+0.138658215 container start 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:47 compute-0 podman[90993]: 2026-02-02 15:07:47.834586051 +0000 UTC m=+0.142091764 container attach 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:07:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1726678753' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:48 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb 02 15:07:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1726678753' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb 02 15:07:48 compute-0 jolly_panini[91009]: pool 'cephfs.cephfs.meta' created
Feb 02 15:07:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb 02 15:07:48 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1024279554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:48 compute-0 ceph-mon[75334]: osdmap e20: 3 total, 3 up, 3 in
Feb 02 15:07:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1726678753' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:48 compute-0 systemd[1]: libpod-667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5.scope: Deactivated successfully.
Feb 02 15:07:48 compute-0 podman[90993]: 2026-02-02 15:07:48.391045249 +0000 UTC m=+0.698550932 container died 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:48 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6da8ec4bb2c53cf70ce886203ad4c77fa3c4028d2e8bc969a4ca9f370f75299-merged.mount: Deactivated successfully.
Feb 02 15:07:48 compute-0 podman[90993]: 2026-02-02 15:07:48.433368304 +0000 UTC m=+0.740873987 container remove 667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5 (image=quay.io/ceph/ceph:v20, name=jolly_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:48 compute-0 systemd[1]: libpod-conmon-667027e6362a45a8b5c6e234a959ca8b25bc2d2c16bd7ed0aa8c8a821c5355a5.scope: Deactivated successfully.
Feb 02 15:07:48 compute-0 sudo[90990]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:48 compute-0 sudo[91070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvbyahdjpdgmfafbewvogoysbxnefja ; /usr/bin/python3'
Feb 02 15:07:48 compute-0 sudo[91070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:48 compute-0 python3[91072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v45: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:07:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:07:48 compute-0 podman[91073]: 2026-02-02 15:07:48.780532311 +0000 UTC m=+0.053235187 container create 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:07:48 compute-0 systemd[1]: Started libpod-conmon-31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c.scope.
Feb 02 15:07:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95878d4ea9539ab70d0e7495c5f21157d37b6ca835856937a8f2a540fc6be04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95878d4ea9539ab70d0e7495c5f21157d37b6ca835856937a8f2a540fc6be04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:48 compute-0 podman[91073]: 2026-02-02 15:07:48.846987983 +0000 UTC m=+0.119690859 container init 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:48 compute-0 podman[91073]: 2026-02-02 15:07:48.750889479 +0000 UTC m=+0.023592365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:48 compute-0 podman[91073]: 2026-02-02 15:07:48.856786368 +0000 UTC m=+0.129489254 container start 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:07:48 compute-0 podman[91073]: 2026-02-02 15:07:48.860262618 +0000 UTC m=+0.132965484 container attach 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:07:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 02 15:07:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/552543380' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb 02 15:07:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:07:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/552543380' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb 02 15:07:49 compute-0 relaxed_banzai[91088]: pool 'cephfs.cephfs.data' created
Feb 02 15:07:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb 02 15:07:49 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1726678753' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:49 compute-0 ceph-mon[75334]: osdmap e21: 3 total, 3 up, 3 in
Feb 02 15:07:49 compute-0 ceph-mon[75334]: pgmap v45: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:07:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/552543380' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 02 15:07:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:07:49 compute-0 systemd[1]: libpod-31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c.scope: Deactivated successfully.
Feb 02 15:07:49 compute-0 podman[91073]: 2026-02-02 15:07:49.394306279 +0000 UTC m=+0.667009135 container died 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95878d4ea9539ab70d0e7495c5f21157d37b6ca835856937a8f2a540fc6be04-merged.mount: Deactivated successfully.
Feb 02 15:07:49 compute-0 podman[91073]: 2026-02-02 15:07:49.431924476 +0000 UTC m=+0.704627332 container remove 31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c (image=quay.io/ceph/ceph:v20, name=relaxed_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:07:49 compute-0 systemd[1]: libpod-conmon-31835647e06df4b73eb55860064f5e5b01d65982a51c18883de1cf171617e41c.scope: Deactivated successfully.
Feb 02 15:07:49 compute-0 sudo[91070]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:49 compute-0 sudo[91151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgugcpvbloaiejlcmolysucagbaodoot ; /usr/bin/python3'
Feb 02 15:07:49 compute-0 sudo[91151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22 pruub=11.646620750s) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active pruub 28.262592316s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:49 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 4 completed events
Feb 02 15:07:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22 pruub=11.646620750s) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown pruub 28.262592316s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:07:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:49 compute-0 python3[91153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:49 compute-0 podman[91154]: 2026-02-02 15:07:49.787157809 +0000 UTC m=+0.046098223 container create 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:07:49 compute-0 systemd[1]: Started libpod-conmon-518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc.scope.
Feb 02 15:07:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ae0f65a581370530fdca29d14b9da8c8ba9d3f51970c5eb8aa7f41cdda2f08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ae0f65a581370530fdca29d14b9da8c8ba9d3f51970c5eb8aa7f41cdda2f08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:49 compute-0 podman[91154]: 2026-02-02 15:07:49.851280747 +0000 UTC m=+0.110221261 container init 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:07:49 compute-0 podman[91154]: 2026-02-02 15:07:49.856402275 +0000 UTC m=+0.115342689 container start 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:49 compute-0 podman[91154]: 2026-02-02 15:07:49.860423687 +0000 UTC m=+0.119364171 container attach 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:07:49 compute-0 podman[91154]: 2026-02-02 15:07:49.770989657 +0000 UTC m=+0.029930101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb 02 15:07:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1341899087' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 02 15:07:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb 02 15:07:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1341899087' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 02 15:07:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb 02 15:07:50 compute-0 exciting_faraday[91169]: enabled application 'rbd' on pool 'vms'
Feb 02 15:07:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1f( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1d( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1c( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.b( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1e( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.9( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.8( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.6( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.a( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.5( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.4( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.3( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.7( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.2( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.c( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.d( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.e( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.10( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.11( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.12( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.f( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.13( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.14( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.15( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.16( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.17( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.19( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1a( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1b( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.18( empty local-lis/les=17/18 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:50 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1e( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.0( empty local-lis/les=22/23 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.10( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.12( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.e( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.14( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 23 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=17/17 les/c/f=18/18/0 sis=22) [2] r=0 lpr=22 pi=[17,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/552543380' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 02 15:07:50 compute-0 ceph-mon[75334]: osdmap e22: 3 total, 3 up, 3 in
Feb 02 15:07:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:07:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1341899087' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 02 15:07:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1341899087' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 02 15:07:50 compute-0 ceph-mon[75334]: osdmap e23: 3 total, 3 up, 3 in
Feb 02 15:07:50 compute-0 systemd[1]: libpod-518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc.scope: Deactivated successfully.
Feb 02 15:07:50 compute-0 podman[91154]: 2026-02-02 15:07:50.403292982 +0000 UTC m=+0.662233396 container died 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ae0f65a581370530fdca29d14b9da8c8ba9d3f51970c5eb8aa7f41cdda2f08-merged.mount: Deactivated successfully.
Feb 02 15:07:50 compute-0 podman[91154]: 2026-02-02 15:07:50.459232481 +0000 UTC m=+0.718172935 container remove 518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc (image=quay.io/ceph/ceph:v20, name=exciting_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:50 compute-0 systemd[1]: libpod-conmon-518bfb3384720fc38edf61f731f6ac5a9989abf707458e3ce5570f1751bf03cc.scope: Deactivated successfully.
Feb 02 15:07:50 compute-0 sudo[91151]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:50 compute-0 sudo[91231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyheyhgkvmcoieqnziphtpifiuailwaz ; /usr/bin/python3'
Feb 02 15:07:50 compute-0 sudo[91231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v48: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:50 compute-0 python3[91233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:50 compute-0 podman[91234]: 2026-02-02 15:07:50.814256139 +0000 UTC m=+0.033688257 container create 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:50 compute-0 systemd[1]: Started libpod-conmon-46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a.scope.
Feb 02 15:07:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97bd2e42dad97682d2dfcd44e9638cb88912dbe0a6e3ea6a46d4ccea5849c48f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97bd2e42dad97682d2dfcd44e9638cb88912dbe0a6e3ea6a46d4ccea5849c48f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:50 compute-0 podman[91234]: 2026-02-02 15:07:50.798160138 +0000 UTC m=+0.017592276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:50 compute-0 podman[91234]: 2026-02-02 15:07:50.895187123 +0000 UTC m=+0.114619301 container init 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:07:50 compute-0 podman[91234]: 2026-02-02 15:07:50.899626376 +0000 UTC m=+0.119058524 container start 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:50 compute-0 podman[91234]: 2026-02-02 15:07:50.903656298 +0000 UTC m=+0.123088526 container attach 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb 02 15:07:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1905887127' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 02 15:07:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb 02 15:07:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1905887127' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 02 15:07:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb 02 15:07:51 compute-0 loving_hopper[91249]: enabled application 'rbd' on pool 'volumes'
Feb 02 15:07:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb 02 15:07:51 compute-0 systemd[1]: libpod-46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a.scope: Deactivated successfully.
Feb 02 15:07:51 compute-0 conmon[91249]: conmon 46de3c6fe11c8d9bd73a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a.scope/container/memory.events
Feb 02 15:07:51 compute-0 ceph-mon[75334]: pgmap v48: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1905887127' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 02 15:07:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1905887127' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 02 15:07:51 compute-0 ceph-mon[75334]: osdmap e24: 3 total, 3 up, 3 in
Feb 02 15:07:51 compute-0 podman[91274]: 2026-02-02 15:07:51.449264347 +0000 UTC m=+0.032989841 container died 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-97bd2e42dad97682d2dfcd44e9638cb88912dbe0a6e3ea6a46d4ccea5849c48f-merged.mount: Deactivated successfully.
Feb 02 15:07:51 compute-0 podman[91274]: 2026-02-02 15:07:51.488983052 +0000 UTC m=+0.072708506 container remove 46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a (image=quay.io/ceph/ceph:v20, name=loving_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:07:51 compute-0 systemd[1]: libpod-conmon-46de3c6fe11c8d9bd73a82dc814337b7671e8af99f6c1cc266b2003f59635f6a.scope: Deactivated successfully.
Feb 02 15:07:51 compute-0 sudo[91231]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:51 compute-0 sudo[91312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uieavfztqcvfrrobukqcqzgwswcdvebf ; /usr/bin/python3'
Feb 02 15:07:51 compute-0 sudo[91312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:51 compute-0 python3[91314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:51 compute-0 podman[91315]: 2026-02-02 15:07:51.872774273 +0000 UTC m=+0.048431357 container create 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:07:51 compute-0 systemd[1]: Started libpod-conmon-8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc.scope.
Feb 02 15:07:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752e82fd6f1d5eadcce2a798139f9c6d1903e0f256a756230e66ff387fb2a3f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752e82fd6f1d5eadcce2a798139f9c6d1903e0f256a756230e66ff387fb2a3f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:51 compute-0 podman[91315]: 2026-02-02 15:07:51.853532799 +0000 UTC m=+0.029189973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:51 compute-0 podman[91315]: 2026-02-02 15:07:51.953482521 +0000 UTC m=+0.129139655 container init 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:07:51 compute-0 podman[91315]: 2026-02-02 15:07:51.959293935 +0000 UTC m=+0.134951039 container start 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:07:51 compute-0 podman[91315]: 2026-02-02 15:07:51.963943993 +0000 UTC m=+0.139601097 container attach 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb 02 15:07:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3384251253' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 02 15:07:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb 02 15:07:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3384251253' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 02 15:07:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3384251253' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 02 15:07:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb 02 15:07:52 compute-0 romantic_napier[91330]: enabled application 'rbd' on pool 'backups'
Feb 02 15:07:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb 02 15:07:52 compute-0 systemd[1]: libpod-8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc.scope: Deactivated successfully.
Feb 02 15:07:52 compute-0 podman[91315]: 2026-02-02 15:07:52.472017507 +0000 UTC m=+0.647674621 container died 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-752e82fd6f1d5eadcce2a798139f9c6d1903e0f256a756230e66ff387fb2a3f5-merged.mount: Deactivated successfully.
Feb 02 15:07:52 compute-0 podman[91315]: 2026-02-02 15:07:52.521844944 +0000 UTC m=+0.697502028 container remove 8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc (image=quay.io/ceph/ceph:v20, name=romantic_napier, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:07:52 compute-0 systemd[1]: libpod-conmon-8762ed70f9deb0272ad6475dc685592ad8bcfeb7d23c9db3bab9b8e6d31b0dfc.scope: Deactivated successfully.
Feb 02 15:07:52 compute-0 sudo[91312]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:52 compute-0 sudo[91391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adinfvmqwbyfidtclryzdkehkuaiivcm ; /usr/bin/python3'
Feb 02 15:07:52 compute-0 sudo[91391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v51: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:52 compute-0 python3[91393]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:52 compute-0 podman[91394]: 2026-02-02 15:07:52.869240186 +0000 UTC m=+0.039985121 container create 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:07:52 compute-0 systemd[1]: Started libpod-conmon-2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8.scope.
Feb 02 15:07:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8a5d102e4fdad5a3b8dd59d4a77c49103754f27f3162189ba5d516efd8cef8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8a5d102e4fdad5a3b8dd59d4a77c49103754f27f3162189ba5d516efd8cef8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:52 compute-0 podman[91394]: 2026-02-02 15:07:52.937974339 +0000 UTC m=+0.108719274 container init 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:52 compute-0 podman[91394]: 2026-02-02 15:07:52.94232591 +0000 UTC m=+0.113070835 container start 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:07:52 compute-0 podman[91394]: 2026-02-02 15:07:52.946413354 +0000 UTC m=+0.117158289 container attach 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:07:52 compute-0 podman[91394]: 2026-02-02 15:07:52.852089431 +0000 UTC m=+0.022834356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb 02 15:07:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1102466026' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 02 15:07:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb 02 15:07:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1102466026' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 02 15:07:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb 02 15:07:53 compute-0 dreamy_mayer[91409]: enabled application 'rbd' on pool 'images'
Feb 02 15:07:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3384251253' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 02 15:07:53 compute-0 ceph-mon[75334]: osdmap e25: 3 total, 3 up, 3 in
Feb 02 15:07:53 compute-0 ceph-mon[75334]: pgmap v51: 38 pgs: 1 creating+peering, 5 active+clean, 32 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1102466026' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 02 15:07:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb 02 15:07:53 compute-0 systemd[1]: libpod-2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8.scope: Deactivated successfully.
Feb 02 15:07:53 compute-0 podman[91394]: 2026-02-02 15:07:53.472916092 +0000 UTC m=+0.643661037 container died 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f8a5d102e4fdad5a3b8dd59d4a77c49103754f27f3162189ba5d516efd8cef8-merged.mount: Deactivated successfully.
Feb 02 15:07:53 compute-0 podman[91394]: 2026-02-02 15:07:53.526547358 +0000 UTC m=+0.697292273 container remove 2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8 (image=quay.io/ceph/ceph:v20, name=dreamy_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:07:53 compute-0 systemd[1]: libpod-conmon-2389cc492768fa166b7c97e35f367170da6106b62b49cb323e8ee726a26f08a8.scope: Deactivated successfully.
Feb 02 15:07:53 compute-0 sudo[91391]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:53 compute-0 sudo[91471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxznfmnbrpwixvqodeqyapjbohmxohma ; /usr/bin/python3'
Feb 02 15:07:53 compute-0 sudo[91471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:53 compute-0 python3[91473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:53 compute-0 podman[91474]: 2026-02-02 15:07:53.881896434 +0000 UTC m=+0.043905243 container create 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:53 compute-0 systemd[1]: Started libpod-conmon-1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada.scope.
Feb 02 15:07:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c85f7c589071572cb438b0d48ce0ad2f855584a375edc3272c5f7c8dc82e7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c85f7c589071572cb438b0d48ce0ad2f855584a375edc3272c5f7c8dc82e7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:53 compute-0 podman[91474]: 2026-02-02 15:07:53.865339782 +0000 UTC m=+0.027348601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:53 compute-0 podman[91474]: 2026-02-02 15:07:53.963103714 +0000 UTC m=+0.125112533 container init 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:07:53 compute-0 podman[91474]: 2026-02-02 15:07:53.972128682 +0000 UTC m=+0.134137501 container start 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:53 compute-0 podman[91474]: 2026-02-02 15:07:53.975829347 +0000 UTC m=+0.137838196 container attach 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb 02 15:07:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb 02 15:07:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb 02 15:07:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1747855249' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 02 15:07:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb 02 15:07:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1747855249' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 02 15:07:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb 02 15:07:54 compute-0 peaceful_cerf[91489]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb 02 15:07:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb 02 15:07:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1102466026' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 02 15:07:54 compute-0 ceph-mon[75334]: osdmap e26: 3 total, 3 up, 3 in
Feb 02 15:07:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1747855249' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 02 15:07:54 compute-0 systemd[1]: libpod-1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada.scope: Deactivated successfully.
Feb 02 15:07:54 compute-0 conmon[91489]: conmon 1b089e0669ed445738e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada.scope/container/memory.events
Feb 02 15:07:54 compute-0 podman[91474]: 2026-02-02 15:07:54.491823323 +0000 UTC m=+0.653832132 container died 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-83c85f7c589071572cb438b0d48ce0ad2f855584a375edc3272c5f7c8dc82e7e-merged.mount: Deactivated successfully.
Feb 02 15:07:54 compute-0 podman[91474]: 2026-02-02 15:07:54.533697777 +0000 UTC m=+0.695706586 container remove 1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada (image=quay.io/ceph/ceph:v20, name=peaceful_cerf, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:07:54 compute-0 systemd[1]: libpod-conmon-1b089e0669ed445738e6b1fb67e38644c79f25828aa044a9994096ff84b54ada.scope: Deactivated successfully.
Feb 02 15:07:54 compute-0 sudo[91471]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:54 compute-0 sudo[91551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rngqltvzwhsifzkuqnrifameoxmraqwj ; /usr/bin/python3'
Feb 02 15:07:54 compute-0 sudo[91551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v54: 38 pgs: 1 creating+peering, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:54 compute-0 python3[91553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:54 compute-0 podman[91554]: 2026-02-02 15:07:54.923085407 +0000 UTC m=+0.057464635 container create 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:07:54 compute-0 systemd[1]: Started libpod-conmon-0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367.scope.
Feb 02 15:07:54 compute-0 podman[91554]: 2026-02-02 15:07:54.888635074 +0000 UTC m=+0.023014312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f8b5a3767c0ea3e45fffff9a821ca910f5de9889dcc59565e8191fe0c3125/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f8b5a3767c0ea3e45fffff9a821ca910f5de9889dcc59565e8191fe0c3125/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:55 compute-0 podman[91554]: 2026-02-02 15:07:55.024492983 +0000 UTC m=+0.158872211 container init 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:07:55 compute-0 podman[91554]: 2026-02-02 15:07:55.032355924 +0000 UTC m=+0.166735172 container start 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:07:55 compute-0 podman[91554]: 2026-02-02 15:07:55.039941509 +0000 UTC m=+0.174320797 container attach 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:07:55 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 02 15:07:55 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 02 15:07:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb 02 15:07:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/376607701' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 02 15:07:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb 02 15:07:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/376607701' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 02 15:07:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb 02 15:07:55 compute-0 gallant_goldberg[91569]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb 02 15:07:55 compute-0 ceph-mon[75334]: 2.1c scrub starts
Feb 02 15:07:55 compute-0 ceph-mon[75334]: 2.1c scrub ok
Feb 02 15:07:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1747855249' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 02 15:07:55 compute-0 ceph-mon[75334]: osdmap e27: 3 total, 3 up, 3 in
Feb 02 15:07:55 compute-0 ceph-mon[75334]: pgmap v54: 38 pgs: 1 creating+peering, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/376607701' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 02 15:07:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb 02 15:07:55 compute-0 systemd[1]: libpod-0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367.scope: Deactivated successfully.
Feb 02 15:07:55 compute-0 podman[91554]: 2026-02-02 15:07:55.503001505 +0000 UTC m=+0.637380723 container died 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f84f8b5a3767c0ea3e45fffff9a821ca910f5de9889dcc59565e8191fe0c3125-merged.mount: Deactivated successfully.
Feb 02 15:07:55 compute-0 podman[91554]: 2026-02-02 15:07:55.59435336 +0000 UTC m=+0.728732598 container remove 0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367 (image=quay.io/ceph/ceph:v20, name=gallant_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:07:55 compute-0 sudo[91551]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:07:55 compute-0 systemd[1]: libpod-conmon-0d66694032161c4fd54bebe5c0d0f8d9f73239c60bfd8558a9f59eee5f687367.scope: Deactivated successfully.
Feb 02 15:07:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb 02 15:07:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb 02 15:07:56 compute-0 python3[91681]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:07:56 compute-0 ceph-mon[75334]: 2.1f scrub starts
Feb 02 15:07:56 compute-0 ceph-mon[75334]: 2.1f scrub ok
Feb 02 15:07:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/376607701' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 02 15:07:56 compute-0 ceph-mon[75334]: osdmap e28: 3 total, 3 up, 3 in
Feb 02 15:07:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v56: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:07:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:07:56 compute-0 python3[91752]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044876.1491585-36669-218871810881997/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:07:57 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 02 15:07:57 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 02 15:07:57 compute-0 sudo[91852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxmwdlbwvbbnuenukxogljopmguwkpb ; /usr/bin/python3'
Feb 02 15:07:57 compute-0 sudo[91852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:57 compute-0 python3[91854]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:07:57 compute-0 sudo[91852]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb 02 15:07:57 compute-0 ceph-mon[75334]: 2.b scrub starts
Feb 02 15:07:57 compute-0 ceph-mon[75334]: 2.b scrub ok
Feb 02 15:07:57 compute-0 ceph-mon[75334]: pgmap v56: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:07:57 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:07:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb 02 15:07:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849727631s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312442780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849996567s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312721252s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849651337s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312442780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849988937s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312767029s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849672318s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312442780s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849939346s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312721252s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.a( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849928856s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312767029s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1c( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849602699s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312442780s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854506493s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317588806s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849648476s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312744141s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849633217s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312747955s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.6( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854487419s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317588806s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.8( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849614143s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312744141s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.9( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849607468s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312747955s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854380608s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317623138s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.5( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854365349s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317623138s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854253769s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317588806s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854636192s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317989349s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854607582s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317970276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.4( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854232788s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317588806s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.2( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854592323s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317970276s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.3( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854614258s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317989349s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.849267006s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.312709808s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854442596s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.317951202s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.7( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854389191s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.317951202s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854462624s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318058014s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.d( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854446411s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318058014s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854525566s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318210602s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.f( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854514122s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318210602s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854297638s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318077087s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.11( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854282379s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318077087s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854320526s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318187714s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.13( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854298592s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318187714s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.848813057s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.312709808s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854213715s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318248749s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854187012s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318229675s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.15( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854190826s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318248749s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.16( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854159355s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318229675s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854158401s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318267822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.17( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854137421s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318267822s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854168892s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318370819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854049683s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318286896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.19( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854026794s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318286896s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.18( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854114532s) [0] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318370819s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854051590s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 active pruub 33.318325043s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:07:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 29 pg[2.1b( empty local-lis/les=22/23 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29 pruub=8.854016304s) [1] r=-1 lpr=29 pi=[22,29)/1 crt=0'0 unknown NOTIFY pruub 33.318325043s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:07:57 compute-0 sudo[91927]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgpuouhwuqbbghgmcdgtgjciwapyaaen ; /usr/bin/python3'
Feb 02 15:07:57 compute-0 sudo[91927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.16( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.7( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.b( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.f( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.2( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.17( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.5( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.3( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.8( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.1d( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.11( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.1c( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 29 pg[2.18( empty local-lis/les=0/0 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:07:57 compute-0 python3[91929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044877.0697608-36683-222697749256129/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=5e0fc797a3a6de53c06f97849ce2ab726576157b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:07:57 compute-0 sudo[91927]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:57 compute-0 sudo[91977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphtuckdusncmzzfopvemzndpmilkkcs ; /usr/bin/python3'
Feb 02 15:07:57 compute-0 sudo[91977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:58 compute-0 python3[91979]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.178483227 +0000 UTC m=+0.065038118 container create 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:07:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb 02 15:07:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb 02 15:07:58 compute-0 systemd[1]: Started libpod-conmon-43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f.scope.
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.135673772 +0000 UTC m=+0.022228723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4cc3094ef2b3141d47404995b77b721ec7d7a6a97c0082879797552e5ce38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4cc3094ef2b3141d47404995b77b721ec7d7a6a97c0082879797552e5ce38/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4cc3094ef2b3141d47404995b77b721ec7d7a6a97c0082879797552e5ce38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.284413618 +0000 UTC m=+0.170968499 container init 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.288347129 +0000 UTC m=+0.174901990 container start 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.300693273 +0000 UTC m=+0.187248184 container attach 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb 02 15:07:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb 02 15:07:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb 02 15:07:58 compute-0 ceph-mon[75334]: 2.8 scrub starts
Feb 02 15:07:58 compute-0 ceph-mon[75334]: 2.8 scrub ok
Feb 02 15:07:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:07:58 compute-0 ceph-mon[75334]: osdmap e29: 3 total, 3 up, 3 in
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=22/17 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:07:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 02 15:07:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/727300281' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:07:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v59: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/727300281' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 15:07:58 compute-0 sad_montalcini[91995]: 
Feb 02 15:07:58 compute-0 sad_montalcini[91995]: [global]
Feb 02 15:07:58 compute-0 sad_montalcini[91995]:         fsid = e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:07:58 compute-0 sad_montalcini[91995]:         mon_host = 192.168.122.100
Feb 02 15:07:58 compute-0 sad_montalcini[91995]:         rgw_keystone_api_version = 3
Feb 02 15:07:58 compute-0 systemd[1]: libpod-43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f.scope: Deactivated successfully.
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.773551705 +0000 UTC m=+0.660106616 container died 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:07:58 compute-0 sudo[92020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:07:58 compute-0 sudo[92020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:58 compute-0 sudo[92020]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:58 compute-0 sudo[92057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:07:58 compute-0 sudo[92057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f4cc3094ef2b3141d47404995b77b721ec7d7a6a97c0082879797552e5ce38-merged.mount: Deactivated successfully.
Feb 02 15:07:58 compute-0 podman[91980]: 2026-02-02 15:07:58.944805001 +0000 UTC m=+0.831359882 container remove 43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f (image=quay.io/ceph/ceph:v20, name=sad_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:07:58 compute-0 systemd[1]: libpod-conmon-43a9deb25bd29f520ac9e818d11e46a2bffc39581fc6e51580365102d793a35f.scope: Deactivated successfully.
Feb 02 15:07:58 compute-0 sudo[91977]: pam_unix(sudo:session): session closed for user root
Feb 02 15:07:59 compute-0 sudo[92129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoagjmsefgeogvzbbdampdozorweamwd ; /usr/bin/python3'
Feb 02 15:07:59 compute-0 sudo[92129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:07:59 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 02 15:07:59 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 02 15:07:59 compute-0 python3[92136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:07:59 compute-0 podman[92153]: 2026-02-02 15:07:59.303478693 +0000 UTC m=+0.074166939 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:07:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 02 15:07:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 02 15:07:59 compute-0 podman[92167]: 2026-02-02 15:07:59.346251528 +0000 UTC m=+0.054896395 container create 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:07:59 compute-0 systemd[1]: Started libpod-conmon-4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd.scope.
Feb 02 15:07:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2a08b29bc659b99f60ec3ebc60990d7f92497ee4e6ca07812336efabe58986/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2a08b29bc659b99f60ec3ebc60990d7f92497ee4e6ca07812336efabe58986/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee2a08b29bc659b99f60ec3ebc60990d7f92497ee4e6ca07812336efabe58986/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:07:59 compute-0 podman[92153]: 2026-02-02 15:07:59.408087632 +0000 UTC m=+0.178775908 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:07:59 compute-0 podman[92167]: 2026-02-02 15:07:59.322111302 +0000 UTC m=+0.030756149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:07:59 compute-0 podman[92167]: 2026-02-02 15:07:59.426531118 +0000 UTC m=+0.135175965 container init 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:07:59 compute-0 podman[92167]: 2026-02-02 15:07:59.435475213 +0000 UTC m=+0.144120050 container start 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:07:59 compute-0 podman[92167]: 2026-02-02 15:07:59.448111084 +0000 UTC m=+0.156755921 container attach 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:07:59 compute-0 ceph-mon[75334]: 2.1e scrub starts
Feb 02 15:07:59 compute-0 ceph-mon[75334]: 2.1e scrub ok
Feb 02 15:07:59 compute-0 ceph-mon[75334]: osdmap e30: 3 total, 3 up, 3 in
Feb 02 15:07:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/727300281' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 02 15:07:59 compute-0 ceph-mon[75334]: pgmap v59: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:07:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/727300281' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 02 15:07:59 compute-0 ceph-mon[75334]: 2.6 scrub starts
Feb 02 15:07:59 compute-0 ceph-mon[75334]: 2.6 scrub ok
Feb 02 15:07:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb 02 15:07:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4115407499' entity='client.admin' 
Feb 02 15:07:59 compute-0 agitated_brattain[92186]: set ssl_option
Feb 02 15:08:00 compute-0 systemd[1]: libpod-4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd.scope: Deactivated successfully.
Feb 02 15:08:00 compute-0 podman[92167]: 2026-02-02 15:08:00.001934982 +0000 UTC m=+0.710579819 container died 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:00 compute-0 sudo[92057]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee2a08b29bc659b99f60ec3ebc60990d7f92497ee4e6ca07812336efabe58986-merged.mount: Deactivated successfully.
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:00 compute-0 podman[92167]: 2026-02-02 15:08:00.051560765 +0000 UTC m=+0.760205592 container remove 4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd (image=quay.io/ceph/ceph:v20, name=agitated_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 systemd[1]: libpod-conmon-4d22bc331fc839c5bb726c3d97cbc4443a6ec93a877f8d9de3aee71f1f7df4dd.scope: Deactivated successfully.
Feb 02 15:08:00 compute-0 sudo[92129]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:00 compute-0 sudo[92354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:00 compute-0 sudo[92354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:00 compute-0 sudo[92354]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:00 compute-0 sudo[92379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:08:00 compute-0 sudo[92379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:00 compute-0 sudo[92427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atwnrifhlzgloxfbgperwjqdjtzronpl ; /usr/bin/python3'
Feb 02 15:08:00 compute-0 sudo[92427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:00 compute-0 python3[92429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.416107062 +0000 UTC m=+0.043973053 container create c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:08:00 compute-0 systemd[1]: Started libpod-conmon-c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce.scope.
Feb 02 15:08:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5017bc5f45f5c6084d85d56510a92a9ffb40f936347f1abef8ee317ff6d7ec9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5017bc5f45f5c6084d85d56510a92a9ffb40f936347f1abef8ee317ff6d7ec9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5017bc5f45f5c6084d85d56510a92a9ffb40f936347f1abef8ee317ff6d7ec9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.478742475 +0000 UTC m=+0.106608536 container init c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.487687612 +0000 UTC m=+0.115553593 container start c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.491591121 +0000 UTC m=+0.119457182 container attach c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.396136012 +0000 UTC m=+0.024002043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:00 compute-0 sudo[92379]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:00 compute-0 sudo[92499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:00 compute-0 sudo[92499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:00 compute-0 sudo[92499]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v60: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:00 compute-0 sudo[92524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:08:00 compute-0 sudo[92524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:00 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:00 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:00 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 15:08:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 compassionate_jones[92462]: Scheduled rgw.rgw update...
Feb 02 15:08:00 compute-0 systemd[1]: libpod-c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce.scope: Deactivated successfully.
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.945620701 +0000 UTC m=+0.573486722 container died c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5017bc5f45f5c6084d85d56510a92a9ffb40f936347f1abef8ee317ff6d7ec9-merged.mount: Deactivated successfully.
Feb 02 15:08:00 compute-0 ceph-mon[75334]: 2.1 scrub starts
Feb 02 15:08:00 compute-0 ceph-mon[75334]: 2.1 scrub ok
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4115407499' entity='client.admin' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:00 compute-0 ceph-mon[75334]: pgmap v60: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:00 compute-0 podman[92440]: 2026-02-02 15:08:00.989599803 +0000 UTC m=+0.617465794 container remove c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce (image=quay.io/ceph/ceph:v20, name=compassionate_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:00 compute-0 systemd[1]: libpod-conmon-c545f9f18244b7409ac76ac7e7894b3c77933526af5c13f10aeafd910de185ce.scope: Deactivated successfully.
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.007374713 +0000 UTC m=+0.051845705 container create a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:01 compute-0 sudo[92427]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:01 compute-0 systemd[1]: Started libpod-conmon-a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871.scope.
Feb 02 15:08:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.064216742 +0000 UTC m=+0.108687824 container init a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.069014083 +0000 UTC m=+0.113485115 container start a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:01 compute-0 quirky_napier[92590]: 167 167
Feb 02 15:08:01 compute-0 systemd[1]: libpod-a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871.scope: Deactivated successfully.
Feb 02 15:08:01 compute-0 conmon[92590]: conmon a0de1bf5dc6c03bdba9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871.scope/container/memory.events
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.073192509 +0000 UTC m=+0.117663541 container attach a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.07412751 +0000 UTC m=+0.118598542 container died a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:00.989510771 +0000 UTC m=+0.033981773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-68471500b11569afc30cc5410938d093f6dd2888862c5d079b84220d65c13424-merged.mount: Deactivated successfully.
Feb 02 15:08:01 compute-0 podman[92562]: 2026-02-02 15:08:01.115385611 +0000 UTC m=+0.159856613 container remove a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_napier, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:08:01 compute-0 systemd[1]: libpod-conmon-a0de1bf5dc6c03bdba9cf63472f41a9c487414b1b02c729a60988e68c8dd3871.scope: Deactivated successfully.
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.247872562 +0000 UTC m=+0.053054732 container create 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:08:01 compute-0 systemd[1]: Started libpod-conmon-636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb.scope.
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.219786026 +0000 UTC m=+0.024968226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.369786792 +0000 UTC m=+0.174969032 container init 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.375484503 +0000 UTC m=+0.180666693 container start 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.379482484 +0000 UTC m=+0.184664684 container attach 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:08:01 compute-0 sharp_hypatia[92632]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:08:01 compute-0 sharp_hypatia[92632]: --> All data devices are unavailable
Feb 02 15:08:01 compute-0 systemd[1]: libpod-636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb.scope: Deactivated successfully.
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.824508517 +0000 UTC m=+0.629690707 container died 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4cdc3fef64094648c9d45e4ec4eee2504d0511e3cb1814b32503c27c4d6cb7-merged.mount: Deactivated successfully.
Feb 02 15:08:01 compute-0 podman[92614]: 2026-02-02 15:08:01.912834781 +0000 UTC m=+0.718016941 container remove 636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:01 compute-0 systemd[1]: libpod-conmon-636f773b040251bddd2abbe5fced839d076ff8c3e48896d9a9be5a848a1fcecb.scope: Deactivated successfully.
Feb 02 15:08:01 compute-0 sudo[92524]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:01 compute-0 python3[92739]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:08:02 compute-0 sudo[92741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:02 compute-0 sudo[92741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:02 compute-0 sudo[92741]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:02 compute-0 sudo[92769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:08:02 compute-0 sudo[92769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:02 compute-0 python3[92861]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044881.7318761-36724-117807305282545/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.330504394 +0000 UTC m=+0.049191089 container create 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:08:02 compute-0 systemd[1]: Started libpod-conmon-318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15.scope.
Feb 02 15:08:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.393468973 +0000 UTC m=+0.112155668 container init 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.397855925 +0000 UTC m=+0.116542570 container start 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.306678481 +0000 UTC m=+0.025365236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.401547742 +0000 UTC m=+0.120234437 container attach 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:02 compute-0 systemd[1]: libpod-318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15.scope: Deactivated successfully.
Feb 02 15:08:02 compute-0 gallant_raman[92915]: 167 167
Feb 02 15:08:02 compute-0 conmon[92915]: conmon 318ba5d10c4560f597dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15.scope/container/memory.events
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.403178707 +0000 UTC m=+0.121865402 container died 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8df808ddecf6133f145230a0fa9a131b1383a239202d24b82a76a7905956e0c-merged.mount: Deactivated successfully.
Feb 02 15:08:02 compute-0 podman[92874]: 2026-02-02 15:08:02.445838947 +0000 UTC m=+0.164525642 container remove 318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:08:02 compute-0 systemd[1]: libpod-conmon-318ba5d10c4560f597ddd3117faeceabed69e9b5d8fdf45ca005773776578e15.scope: Deactivated successfully.
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.570568089 +0000 UTC m=+0.038220307 container create 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:02 compute-0 systemd[1]: Started libpod-conmon-03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419.scope.
Feb 02 15:08:02 compute-0 sudo[92979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssrybarvyjachaqvqabelqdfjzbjavl ; /usr/bin/python3'
Feb 02 15:08:02 compute-0 sudo[92979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d0fdbbf59cd385530259396adc3c3118f117b384348842b90b30daaa8d7508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d0fdbbf59cd385530259396adc3c3118f117b384348842b90b30daaa8d7508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d0fdbbf59cd385530259396adc3c3118f117b384348842b90b30daaa8d7508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d0fdbbf59cd385530259396adc3c3118f117b384348842b90b30daaa8d7508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.554179513 +0000 UTC m=+0.021831711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:02 compute-0 ceph-mon[75334]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:02 compute-0 ceph-mon[75334]: Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.669342853 +0000 UTC m=+0.136995051 container init 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.6762806 +0000 UTC m=+0.143932778 container start 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.697391385 +0000 UTC m=+0.165043583 container attach 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v61: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:02 compute-0 python3[92983]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:02 compute-0 podman[92987]: 2026-02-02 15:08:02.800188554 +0000 UTC m=+0.037105694 container create cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:08:02 compute-0 systemd[1]: Started libpod-conmon-cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38.scope.
Feb 02 15:08:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7b77710c926064da8de98f3f12120d35e7a70d9928fe4ec0a08f4b147da942/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7b77710c926064da8de98f3f12120d35e7a70d9928fe4ec0a08f4b147da942/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7b77710c926064da8de98f3f12120d35e7a70d9928fe4ec0a08f4b147da942/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:02 compute-0 podman[92987]: 2026-02-02 15:08:02.785250719 +0000 UTC m=+0.022167879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:02 compute-0 podman[92987]: 2026-02-02 15:08:02.885611946 +0000 UTC m=+0.122529106 container init cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:08:02 compute-0 podman[92987]: 2026-02-02 15:08:02.891889269 +0000 UTC m=+0.128806429 container start cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:08:02 compute-0 podman[92987]: 2026-02-02 15:08:02.895902664 +0000 UTC m=+0.132819814 container attach cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:08:02 compute-0 youthful_taussig[92981]: {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     "0": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "devices": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "/dev/loop3"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             ],
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_name": "ceph_lv0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_size": "21470642176",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "name": "ceph_lv0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "tags": {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.crush_device_class": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.encrypted": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_id": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.vdo": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.with_tpm": "0"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             },
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "vg_name": "ceph_vg0"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         }
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     ],
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     "1": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "devices": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "/dev/loop4"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             ],
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_name": "ceph_lv1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_size": "21470642176",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "name": "ceph_lv1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "tags": {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.crush_device_class": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.encrypted": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_id": "1",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.vdo": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.with_tpm": "0"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             },
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "vg_name": "ceph_vg1"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         }
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     ],
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     "2": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "devices": [
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "/dev/loop5"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             ],
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_name": "ceph_lv2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_size": "21470642176",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "name": "ceph_lv2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "tags": {
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.crush_device_class": "",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.encrypted": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osd_id": "2",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.vdo": "0",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:                 "ceph.with_tpm": "0"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             },
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "type": "block",
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:             "vg_name": "ceph_vg2"
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:         }
Feb 02 15:08:02 compute-0 youthful_taussig[92981]:     ]
Feb 02 15:08:02 compute-0 youthful_taussig[92981]: }
Feb 02 15:08:02 compute-0 systemd[1]: libpod-03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419.scope: Deactivated successfully.
Feb 02 15:08:02 compute-0 podman[92939]: 2026-02-02 15:08:02.969312322 +0000 UTC m=+0.436964540 container died 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d0fdbbf59cd385530259396adc3c3118f117b384348842b90b30daaa8d7508-merged.mount: Deactivated successfully.
Feb 02 15:08:03 compute-0 podman[92939]: 2026-02-02 15:08:03.014096317 +0000 UTC m=+0.481748535 container remove 03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_taussig, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:03 compute-0 systemd[1]: libpod-conmon-03521e2e2bb9fdd4a107f32a494ea5297be19cec47963fb92db960561c347419.scope: Deactivated successfully.
Feb 02 15:08:03 compute-0 sudo[92769]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:03 compute-0 sudo[93041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:03 compute-0 sudo[93041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:03 compute-0 sudo[93041]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:03 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 02 15:08:03 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 02 15:08:03 compute-0 sudo[93066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:08:03 compute-0 sudo[93066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:03 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:03 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 02 15:08:03 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0[75330]: 2026-02-02T15:08:03.371+0000 7f1dff35e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e2 new map
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-02-02T15:08:03:372964+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T15:08:03.372406+0000
                                           modified        2026-02-02T15:08:03.372406+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb 02 15:08:03 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:03 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:03 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 02 15:08:03 compute-0 systemd[1]: libpod-cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38.scope: Deactivated successfully.
Feb 02 15:08:03 compute-0 podman[92987]: 2026-02-02 15:08:03.42370106 +0000 UTC m=+0.660618210 container died cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce7b77710c926064da8de98f3f12120d35e7a70d9928fe4ec0a08f4b147da942-merged.mount: Deactivated successfully.
Feb 02 15:08:03 compute-0 podman[92987]: 2026-02-02 15:08:03.46306612 +0000 UTC m=+0.699983270 container remove cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38 (image=quay.io/ceph/ceph:v20, name=heuristic_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:03 compute-0 systemd[1]: libpod-conmon-cde0a210f3e4e34779c4448fc73d081956b48ddb9bbf85897dc70b0baefdca38.scope: Deactivated successfully.
Feb 02 15:08:03 compute-0 sudo[92979]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.493617194 +0000 UTC m=+0.055621234 container create 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:03 compute-0 systemd[1]: Started libpod-conmon-95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f.scope.
Feb 02 15:08:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.474178825 +0000 UTC m=+0.036182895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.568915523 +0000 UTC m=+0.130919633 container init 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.572354146 +0000 UTC m=+0.134358186 container start 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.576387211 +0000 UTC m=+0.138391341 container attach 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:03 compute-0 trusting_engelbart[93132]: 167 167
Feb 02 15:08:03 compute-0 systemd[1]: libpod-95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f.scope: Deactivated successfully.
Feb 02 15:08:03 compute-0 conmon[93132]: conmon 95d1aba43725b035d5cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f.scope/container/memory.events
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.57964715 +0000 UTC m=+0.141651210 container died 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-424ae322d2adfd8e39cb2f3df7cf2e5a42213de4a6553a2e58c137a06d182660-merged.mount: Deactivated successfully.
Feb 02 15:08:03 compute-0 podman[93106]: 2026-02-02 15:08:03.625926626 +0000 UTC m=+0.187930666 container remove 95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_engelbart, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:03 compute-0 sudo[93173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydfcsobzjtgrcrldrntdyyhqxydddjkx ; /usr/bin/python3'
Feb 02 15:08:03 compute-0 sudo[93173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:03 compute-0 systemd[1]: libpod-conmon-95d1aba43725b035d5cc2424f459a173e7895ee1c0434e9e88df432fed22181f.scope: Deactivated successfully.
Feb 02 15:08:03 compute-0 ceph-mon[75334]: pgmap v61: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 02 15:08:03 compute-0 ceph-mon[75334]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 02 15:08:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 02 15:08:03 compute-0 ceph-mon[75334]: osdmap e31: 3 total, 3 up, 3 in
Feb 02 15:08:03 compute-0 ceph-mon[75334]: fsmap cephfs:0
Feb 02 15:08:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:03 compute-0 python3[93175]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:03 compute-0 podman[93183]: 2026-02-02 15:08:03.791794305 +0000 UTC m=+0.047976402 container create af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:08:03 compute-0 systemd[1]: Started libpod-conmon-af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7.scope.
Feb 02 15:08:03 compute-0 podman[93197]: 2026-02-02 15:08:03.840658387 +0000 UTC m=+0.037591395 container create 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:08:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d56a09fb0c9936a51a6ab77a5b6ed5a8bb2c5f16d9c55da62b5ace41569dc00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 systemd[1]: Started libpod-conmon-3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38.scope.
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d56a09fb0c9936a51a6ab77a5b6ed5a8bb2c5f16d9c55da62b5ace41569dc00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d56a09fb0c9936a51a6ab77a5b6ed5a8bb2c5f16d9c55da62b5ace41569dc00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d56a09fb0c9936a51a6ab77a5b6ed5a8bb2c5f16d9c55da62b5ace41569dc00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 podman[93183]: 2026-02-02 15:08:03.776080144 +0000 UTC m=+0.032262291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:03 compute-0 podman[93183]: 2026-02-02 15:08:03.880676171 +0000 UTC m=+0.136858278 container init af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae226e5d312dd89e10c25374839a409d192eb5eac3756ac3b09162ffe181ef61/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae226e5d312dd89e10c25374839a409d192eb5eac3756ac3b09162ffe181ef61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae226e5d312dd89e10c25374839a409d192eb5eac3756ac3b09162ffe181ef61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:03 compute-0 podman[93183]: 2026-02-02 15:08:03.893067283 +0000 UTC m=+0.149249380 container start af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:03 compute-0 podman[93183]: 2026-02-02 15:08:03.896816091 +0000 UTC m=+0.152998198 container attach af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:08:03 compute-0 podman[93197]: 2026-02-02 15:08:03.912061423 +0000 UTC m=+0.108994471 container init 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:08:03 compute-0 podman[93197]: 2026-02-02 15:08:03.91659061 +0000 UTC m=+0.113523608 container start 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:03 compute-0 podman[93197]: 2026-02-02 15:08:03.919772736 +0000 UTC m=+0.116705774 container attach 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:08:03 compute-0 podman[93197]: 2026-02-02 15:08:03.82516087 +0000 UTC m=+0.022093888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:04 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 02 15:08:04 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:04 compute-0 ceph-mgr[75628]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:04 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 15:08:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:04 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 02 15:08:04 compute-0 friendly_newton[93218]: Scheduled mds.cephfs update...
Feb 02 15:08:04 compute-0 systemd[1]: libpod-3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38.scope: Deactivated successfully.
Feb 02 15:08:04 compute-0 podman[93304]: 2026-02-02 15:08:04.373131812 +0000 UTC m=+0.017093242 container died 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae226e5d312dd89e10c25374839a409d192eb5eac3756ac3b09162ffe181ef61-merged.mount: Deactivated successfully.
Feb 02 15:08:04 compute-0 podman[93304]: 2026-02-02 15:08:04.405774851 +0000 UTC m=+0.049736251 container remove 3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38 (image=quay.io/ceph/ceph:v20, name=friendly_newton, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:08:04 compute-0 systemd[1]: libpod-conmon-3373a8b0ec5b14782f59cbb5f82127c075135052f4cd20cf9d9b3d9e38402e38.scope: Deactivated successfully.
Feb 02 15:08:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 02 15:08:04 compute-0 sudo[93173]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 02 15:08:04 compute-0 lvm[93331]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:04 compute-0 lvm[93331]: VG ceph_vg0 finished
Feb 02 15:08:04 compute-0 lvm[93334]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:08:04 compute-0 lvm[93334]: VG ceph_vg1 finished
Feb 02 15:08:04 compute-0 lvm[93336]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:04 compute-0 lvm[93336]: VG ceph_vg0 finished
Feb 02 15:08:04 compute-0 lvm[93337]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:08:04 compute-0 lvm[93337]: VG ceph_vg2 finished
Feb 02 15:08:04 compute-0 bold_goldwasser[93212]: {}
Feb 02 15:08:04 compute-0 systemd[1]: libpod-af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7.scope: Deactivated successfully.
Feb 02 15:08:04 compute-0 systemd[1]: libpod-af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7.scope: Consumed 1.014s CPU time.
Feb 02 15:08:04 compute-0 podman[93183]: 2026-02-02 15:08:04.648601694 +0000 UTC m=+0.904783801 container died af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:04 compute-0 ceph-mon[75334]: 2.0 scrub starts
Feb 02 15:08:04 compute-0 ceph-mon[75334]: 2.0 scrub ok
Feb 02 15:08:04 compute-0 ceph-mon[75334]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:04 compute-0 ceph-mon[75334]: Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:04 compute-0 ceph-mon[75334]: 2.5 scrub starts
Feb 02 15:08:04 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:04 compute-0 ceph-mon[75334]: 2.5 scrub ok
Feb 02 15:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d56a09fb0c9936a51a6ab77a5b6ed5a8bb2c5f16d9c55da62b5ace41569dc00-merged.mount: Deactivated successfully.
Feb 02 15:08:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v63: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:04 compute-0 podman[93183]: 2026-02-02 15:08:04.74603197 +0000 UTC m=+1.002214067 container remove af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:08:04 compute-0 systemd[1]: libpod-conmon-af4b6a46129081d8a67ab4b868ad90c919a6715dfd5c48db717b870264645cb7.scope: Deactivated successfully.
Feb 02 15:08:04 compute-0 sudo[93066]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:04 compute-0 sudo[93354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:08:04 compute-0 sudo[93354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:04 compute-0 sudo[93354]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:04 compute-0 sudo[93379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:04 compute-0 sudo[93379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:04 compute-0 sudo[93379]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:04 compute-0 sudo[93404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:08:04 compute-0 sudo[93404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:05 compute-0 sudo[93534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aggtgxqufdplkpowhlurwkzugrlncbie ; /usr/bin/python3'
Feb 02 15:08:05 compute-0 sudo[93534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 02 15:08:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 02 15:08:05 compute-0 python3[93537]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 02 15:08:05 compute-0 sudo[93534]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:05 compute-0 podman[93550]: 2026-02-02 15:08:05.419278555 +0000 UTC m=+0.094106566 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:05 compute-0 podman[93550]: 2026-02-02 15:08:05.528151602 +0000 UTC m=+0.202979643 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:08:05 compute-0 sudo[93658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aznkmkzynrnepdareboyqpkyyhpxbtuq ; /usr/bin/python3'
Feb 02 15:08:05 compute-0 sudo[93658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:05 compute-0 ceph-mon[75334]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 15:08:05 compute-0 ceph-mon[75334]: Saving service mds.cephfs spec with placement compute-0
Feb 02 15:08:05 compute-0 ceph-mon[75334]: 2.f scrub starts
Feb 02 15:08:05 compute-0 ceph-mon[75334]: 2.f scrub ok
Feb 02 15:08:05 compute-0 ceph-mon[75334]: pgmap v63: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:05 compute-0 ceph-mon[75334]: 2.9 scrub starts
Feb 02 15:08:05 compute-0 ceph-mon[75334]: 2.9 scrub ok
Feb 02 15:08:05 compute-0 python3[93667]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770044885.1152997-36772-110899145767268/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=ef385d56b9da4b632a87103668ad2cc30cca0d44 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:08:05 compute-0 sudo[93658]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:06 compute-0 sudo[93821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknqqojpcmokldqjkebflrqqfaiffvjo ; /usr/bin/python3'
Feb 02 15:08:06 compute-0 sudo[93821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:06 compute-0 sudo[93404]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:06 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:06 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:06 compute-0 sudo[93824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:06 compute-0 sudo[93824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:06 compute-0 sudo[93824]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:06 compute-0 python3[93823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:06 compute-0 sudo[93849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:08:06 compute-0 sudo[93849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.311331087 +0000 UTC m=+0.038744509 container create 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:06 compute-0 systemd[1]: Started libpod-conmon-37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3.scope.
Feb 02 15:08:06 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 02 15:08:06 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 02 15:08:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f709b5db418c2b1854c78f454b7e61f7736f05f9c0ff3ef0a870897c61ca67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f709b5db418c2b1854c78f454b7e61f7736f05f9c0ff3ef0a870897c61ca67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.38779559 +0000 UTC m=+0.115209022 container init 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.294592983 +0000 UTC m=+0.022006485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.39396169 +0000 UTC m=+0.121375102 container start 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.396854671 +0000 UTC m=+0.124268103 container attach 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.546845206 +0000 UTC m=+0.038275259 container create 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:06 compute-0 systemd[1]: Started libpod-conmon-0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150.scope.
Feb 02 15:08:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.59822603 +0000 UTC m=+0.089656093 container init 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.601875816 +0000 UTC m=+0.093305869 container start 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:08:06 compute-0 zealous_chaum[93941]: 167 167
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.606667538 +0000 UTC m=+0.098097611 container attach 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:06 compute-0 systemd[1]: libpod-0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150.scope: Deactivated successfully.
Feb 02 15:08:06 compute-0 conmon[93941]: conmon 0b60ceea5acc6b949a01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150.scope/container/memory.events
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.618602199 +0000 UTC m=+0.110032262 container died 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.530161193 +0000 UTC m=+0.021591276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64d416e641f63238e74d8bd3527b32bb0130ec44b0695fb8f0f7f13aff11649-merged.mount: Deactivated successfully.
Feb 02 15:08:06 compute-0 podman[93925]: 2026-02-02 15:08:06.656843467 +0000 UTC m=+0.148273530 container remove 0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:06 compute-0 systemd[1]: libpod-conmon-0b60ceea5acc6b949a017942c47aa29b8dc4c3e6f8db4aa5e8f5d46a9d582150.scope: Deactivated successfully.
Feb 02 15:08:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v64: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:06 compute-0 podman[93966]: 2026-02-02 15:08:06.829178433 +0000 UTC m=+0.061510649 container create c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3401205117' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 02 15:08:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3401205117' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 02 15:08:06 compute-0 systemd[1]: Started libpod-conmon-c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e.scope.
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.889667239 +0000 UTC m=+0.617080681 container died 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:08:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:06 compute-0 podman[93966]: 2026-02-02 15:08:06.804143055 +0000 UTC m=+0.036475331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:06 compute-0 systemd[1]: libpod-37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3.scope: Deactivated successfully.
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:06 compute-0 podman[93966]: 2026-02-02 15:08:06.919257983 +0000 UTC m=+0.151590189 container init c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f709b5db418c2b1854c78f454b7e61f7736f05f9c0ff3ef0a870897c61ca67-merged.mount: Deactivated successfully.
Feb 02 15:08:06 compute-0 podman[93966]: 2026-02-02 15:08:06.927317884 +0000 UTC m=+0.159650080 container start c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:06 compute-0 podman[93966]: 2026-02-02 15:08:06.936375694 +0000 UTC m=+0.168707900 container attach c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:06 compute-0 podman[93870]: 2026-02-02 15:08:06.941078054 +0000 UTC m=+0.668491466 container remove 37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3 (image=quay.io/ceph/ceph:v20, name=interesting_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:08:06 compute-0 systemd[1]: libpod-conmon-37b5b7b88d37ccb5041b678afdf69449d1e6e0188318db365326997330facce3.scope: Deactivated successfully.
Feb 02 15:08:06 compute-0 sudo[93821]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: 2.c scrub starts
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:07 compute-0 ceph-mon[75334]: 2.c scrub ok
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: 2.a scrub starts
Feb 02 15:08:07 compute-0 ceph-mon[75334]: 2.a scrub ok
Feb 02 15:08:07 compute-0 ceph-mon[75334]: pgmap v64: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3401205117' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 02 15:08:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3401205117' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 02 15:08:07 compute-0 stoic_hoover[93984]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:08:07 compute-0 stoic_hoover[93984]: --> All data devices are unavailable
Feb 02 15:08:07 compute-0 systemd[1]: libpod-c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e.scope: Deactivated successfully.
Feb 02 15:08:07 compute-0 podman[93966]: 2026-02-02 15:08:07.309194911 +0000 UTC m=+0.541527137 container died c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3f4b596dca61ab1646cc7b0e155e658117ded58446678626ae0e0e8bfb5055-merged.mount: Deactivated successfully.
Feb 02 15:08:07 compute-0 podman[93966]: 2026-02-02 15:08:07.359151354 +0000 UTC m=+0.591483540 container remove c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:07 compute-0 systemd[1]: libpod-conmon-c172d6c3a6844817dc34d69ee682d99706971cecc25d23b70b0f40f2164ef36e.scope: Deactivated successfully.
Feb 02 15:08:07 compute-0 sudo[93849]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:07 compute-0 sudo[94028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:07 compute-0 sudo[94028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:07 compute-0 sudo[94028]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:07 compute-0 sudo[94094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deuykhcqodsrnkgqfjmfvbqkccqqpltu ; /usr/bin/python3'
Feb 02 15:08:07 compute-0 sudo[94094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:07 compute-0 sudo[94060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:08:07 compute-0 sudo[94060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:07 compute-0 python3[94101]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:07 compute-0 podman[94105]: 2026-02-02 15:08:07.702009819 +0000 UTC m=+0.057228709 container create d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:08:07 compute-0 systemd[1]: Started libpod-conmon-d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37.scope.
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.756264684 +0000 UTC m=+0.056466212 container create dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:07 compute-0 podman[94105]: 2026-02-02 15:08:07.67695657 +0000 UTC m=+0.032175510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4135c36591affb70d5decdcb0efc003692e2fa8f29a69d975045fe579a3fbf26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4135c36591affb70d5decdcb0efc003692e2fa8f29a69d975045fe579a3fbf26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:07 compute-0 systemd[1]: Started libpod-conmon-dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672.scope.
Feb 02 15:08:07 compute-0 podman[94105]: 2026-02-02 15:08:07.800521117 +0000 UTC m=+0.155739987 container init d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:07 compute-0 podman[94105]: 2026-02-02 15:08:07.806108685 +0000 UTC m=+0.161327535 container start d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:08:07 compute-0 podman[94105]: 2026-02-02 15:08:07.809831034 +0000 UTC m=+0.165049884 container attach d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.814259167 +0000 UTC m=+0.114460705 container init dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.818336793 +0000 UTC m=+0.118538311 container start dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:08:07 compute-0 sad_bardeen[94151]: 167 167
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.822086342 +0000 UTC m=+0.122287850 container attach dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:07 compute-0 systemd[1]: libpod-dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672.scope: Deactivated successfully.
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.823452461 +0000 UTC m=+0.123654009 container died dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.730839457 +0000 UTC m=+0.031040995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42b9438e74c55cb71531a447d542b2e1629659e71507c1596a5ede6ecb7efbf-merged.mount: Deactivated successfully.
Feb 02 15:08:07 compute-0 podman[94129]: 2026-02-02 15:08:07.870660238 +0000 UTC m=+0.170861736 container remove dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bardeen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:08:07 compute-0 systemd[1]: libpod-conmon-dade1b20af11dc65eb9793cbb5102edbc269f30002070e0ed688150356b10672.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.029227293 +0000 UTC m=+0.050894055 container create 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:08 compute-0 systemd[1]: Started libpod-conmon-7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645.scope.
Feb 02 15:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd4470938a2ef2e17ffb88509506368b3bf13bccaf4ac526918699e739d4183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd4470938a2ef2e17ffb88509506368b3bf13bccaf4ac526918699e739d4183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd4470938a2ef2e17ffb88509506368b3bf13bccaf4ac526918699e739d4183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd4470938a2ef2e17ffb88509506368b3bf13bccaf4ac526918699e739d4183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.001798804 +0000 UTC m=+0.023465626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.138771214 +0000 UTC m=+0.160437996 container init 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.146878025 +0000 UTC m=+0.168544767 container start 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.150316017 +0000 UTC m=+0.171982839 container attach 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 15:08:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694750503' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:08:08 compute-0 exciting_leavitt[94145]: 
Feb 02 15:08:08 compute-0 exciting_leavitt[94145]: {"fsid":"e43470b2-6632-573a-87d3-0f5428ec59e9","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":102,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1770044857,"num_in_osds":3,"osd_in_since":1770044838,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83869696,"bytes_avail":64328056832,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-02T15:08:03:372964+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T15:07:44.716052+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 02 15:08:08 compute-0 systemd[1]: libpod-d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 conmon[94145]: conmon d1a34dd2f77f84ec7ff6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37.scope/container/memory.events
Feb 02 15:08:08 compute-0 podman[94105]: 2026-02-02 15:08:08.346564058 +0000 UTC m=+0.701782958 container died d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4135c36591affb70d5decdcb0efc003692e2fa8f29a69d975045fe579a3fbf26-merged.mount: Deactivated successfully.
Feb 02 15:08:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/694750503' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:08:08 compute-0 podman[94105]: 2026-02-02 15:08:08.387786798 +0000 UTC m=+0.743005658 container remove d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37 (image=quay.io/ceph/ceph:v20, name=exciting_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:08 compute-0 systemd[1]: libpod-conmon-d1a34dd2f77f84ec7ff675a553792d7681272e435ccde26c9b58671d7c09fb37.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 sudo[94094]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]: {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     "0": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "devices": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "/dev/loop3"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             ],
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_name": "ceph_lv0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_size": "21470642176",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "name": "ceph_lv0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "tags": {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.crush_device_class": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.encrypted": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_id": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.vdo": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.with_tpm": "0"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             },
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "vg_name": "ceph_vg0"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         }
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     ],
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     "1": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "devices": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "/dev/loop4"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             ],
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_name": "ceph_lv1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_size": "21470642176",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "name": "ceph_lv1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "tags": {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.crush_device_class": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.encrypted": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_id": "1",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.vdo": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.with_tpm": "0"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             },
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "vg_name": "ceph_vg1"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         }
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     ],
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     "2": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "devices": [
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "/dev/loop5"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             ],
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_name": "ceph_lv2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_size": "21470642176",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "name": "ceph_lv2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "tags": {
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.crush_device_class": "",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.encrypted": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osd_id": "2",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.vdo": "0",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:                 "ceph.with_tpm": "0"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             },
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "type": "block",
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:             "vg_name": "ceph_vg2"
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:         }
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]:     ]
Feb 02 15:08:08 compute-0 wizardly_diffie[94212]: }
Feb 02 15:08:08 compute-0 systemd[1]: libpod-7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.439109602 +0000 UTC m=+0.460776314 container died 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd4470938a2ef2e17ffb88509506368b3bf13bccaf4ac526918699e739d4183-merged.mount: Deactivated successfully.
Feb 02 15:08:08 compute-0 podman[94195]: 2026-02-02 15:08:08.469221256 +0000 UTC m=+0.490887978 container remove 7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_diffie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:08 compute-0 systemd[1]: libpod-conmon-7532921f2eaef9134bbdce0713f80bbc57aba7f600a26ca1098ac484eb792645.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 sudo[94060]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:08 compute-0 sudo[94248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:08 compute-0 sudo[94294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrczewxhxmsgnxfgekkvhvdlhxxxejj ; /usr/bin/python3'
Feb 02 15:08:08 compute-0 sudo[94248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:08 compute-0 sudo[94294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:08 compute-0 sudo[94248]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:08 compute-0 sudo[94299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:08:08 compute-0 sudo[94299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:08 compute-0 python3[94298]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v65: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:08 compute-0 podman[94324]: 2026-02-02 15:08:08.774505438 +0000 UTC m=+0.056207627 container create e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:08:08 compute-0 systemd[1]: Started libpod-conmon-e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2.scope.
Feb 02 15:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366a4b6bc8ea44d070ca0d054ae3f775abbbade0ea74510439cc56f695596c0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366a4b6bc8ea44d070ca0d054ae3f775abbbade0ea74510439cc56f695596c0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:08 compute-0 podman[94324]: 2026-02-02 15:08:08.753150457 +0000 UTC m=+0.034852666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:08 compute-0 podman[94324]: 2026-02-02 15:08:08.851341459 +0000 UTC m=+0.133043658 container init e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:08 compute-0 podman[94324]: 2026-02-02 15:08:08.857602051 +0000 UTC m=+0.139304230 container start e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:08 compute-0 podman[94324]: 2026-02-02 15:08:08.860552664 +0000 UTC m=+0.142254843 container attach e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.867699834 +0000 UTC m=+0.044479559 container create 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:08:08 compute-0 systemd[1]: Started libpod-conmon-75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238.scope.
Feb 02 15:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.915635056 +0000 UTC m=+0.092414731 container init 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.922770856 +0000 UTC m=+0.099550541 container start 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.92581139 +0000 UTC m=+0.102591075 container attach 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:08:08 compute-0 peaceful_wescoff[94372]: 167 167
Feb 02 15:08:08 compute-0 systemd[1]: libpod-75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238.scope: Deactivated successfully.
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.928033587 +0000 UTC m=+0.104813282 container died 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.845600578 +0000 UTC m=+0.022380303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-069930cfd3c62628464081ebc112f545f7067d29d57f5f20a3a59c827a6d63ff-merged.mount: Deactivated successfully.
Feb 02 15:08:08 compute-0 podman[94353]: 2026-02-02 15:08:08.960077474 +0000 UTC m=+0.136857159 container remove 75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:08 compute-0 systemd[1]: libpod-conmon-75bee114ca2c6d8ec57ed20465f5740918d2eb8caa7a84d2dd3a4e4aae10f238.scope: Deactivated successfully.
Feb 02 15:08:09 compute-0 podman[94415]: 2026-02-02 15:08:09.132574523 +0000 UTC m=+0.063385468 container create 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:08:09 compute-0 systemd[1]: Started libpod-conmon-464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98.scope.
Feb 02 15:08:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e311e19e279798dc2e80a84cf5914946de162bb59b2aaf9e30ee8a0ea26ecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e311e19e279798dc2e80a84cf5914946de162bb59b2aaf9e30ee8a0ea26ecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e311e19e279798dc2e80a84cf5914946de162bb59b2aaf9e30ee8a0ea26ecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e311e19e279798dc2e80a84cf5914946de162bb59b2aaf9e30ee8a0ea26ecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:09 compute-0 podman[94415]: 2026-02-02 15:08:09.197958563 +0000 UTC m=+0.128769518 container init 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:08:09 compute-0 podman[94415]: 2026-02-02 15:08:09.203323175 +0000 UTC m=+0.134134100 container start 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:09 compute-0 podman[94415]: 2026-02-02 15:08:09.107429082 +0000 UTC m=+0.038240037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:09 compute-0 podman[94415]: 2026-02-02 15:08:09.207126996 +0000 UTC m=+0.137937921 container attach 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:08:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:08:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404615091' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:08:09 compute-0 vibrant_snyder[94351]: 
Feb 02 15:08:09 compute-0 vibrant_snyder[94351]: {"epoch":1,"fsid":"e43470b2-6632-573a-87d3-0f5428ec59e9","modified":"2026-02-02T15:06:21.638370Z","created":"2026-02-02T15:06:21.638370Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb 02 15:08:09 compute-0 vibrant_snyder[94351]: dumped monmap epoch 1
Feb 02 15:08:09 compute-0 systemd[1]: libpod-e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2.scope: Deactivated successfully.
Feb 02 15:08:09 compute-0 podman[94324]: 2026-02-02 15:08:09.385918268 +0000 UTC m=+0.667620447 container died e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:08:09 compute-0 ceph-mon[75334]: pgmap v65: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3404615091' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-366a4b6bc8ea44d070ca0d054ae3f775abbbade0ea74510439cc56f695596c0f-merged.mount: Deactivated successfully.
Feb 02 15:08:09 compute-0 podman[94324]: 2026-02-02 15:08:09.419452726 +0000 UTC m=+0.701154895 container remove e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2 (image=quay.io/ceph/ceph:v20, name=vibrant_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:09 compute-0 systemd[1]: libpod-conmon-e75598021e7086058d097e669e44cb07f5318f78a0adbdea9707c04c496417f2.scope: Deactivated successfully.
Feb 02 15:08:09 compute-0 sudo[94294]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb 02 15:08:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb 02 15:08:09 compute-0 sudo[94541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otwhqjteyqbxjcwiwiimzcssusxvjegd ; /usr/bin/python3'
Feb 02 15:08:09 compute-0 sudo[94541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:09 compute-0 lvm[94546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:09 compute-0 lvm[94546]: VG ceph_vg0 finished
Feb 02 15:08:09 compute-0 lvm[94549]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:08:09 compute-0 lvm[94549]: VG ceph_vg1 finished
Feb 02 15:08:09 compute-0 lvm[94551]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:08:09 compute-0 lvm[94551]: VG ceph_vg2 finished
Feb 02 15:08:09 compute-0 lvm[94552]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:09 compute-0 lvm[94552]: VG ceph_vg0 finished
Feb 02 15:08:09 compute-0 python3[94543]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:09 compute-0 gifted_archimedes[94431]: {}
Feb 02 15:08:10 compute-0 systemd[1]: libpod-464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 systemd[1]: libpod-464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98.scope: Consumed 1.070s CPU time.
Feb 02 15:08:10 compute-0 podman[94415]: 2026-02-02 15:08:10.000923624 +0000 UTC m=+0.931734559 container died 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.01920808 +0000 UTC m=+0.069401925 container create 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:10 compute-0 systemd[1]: Started libpod-conmon-1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896.scope.
Feb 02 15:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-81e311e19e279798dc2e80a84cf5914946de162bb59b2aaf9e30ee8a0ea26ecc-merged.mount: Deactivated successfully.
Feb 02 15:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0206a6b94c7d7ef78dce0e6a8e6854116826e484850e2b48b37645e888a05999/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0206a6b94c7d7ef78dce0e6a8e6854116826e484850e2b48b37645e888a05999/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:10 compute-0 podman[94415]: 2026-02-02 15:08:10.079534423 +0000 UTC m=+1.010345368 container remove 464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:09.990253159 +0000 UTC m=+0.040447054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:10 compute-0 systemd[1]: libpod-conmon-464b1a1680d6d6dad7f21d71be9308b1a0ede9032c89660ac8c3b50963b18a98.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.102421626 +0000 UTC m=+0.152615471 container init 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.108369642 +0000 UTC m=+0.158563457 container start 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.112465648 +0000 UTC m=+0.162659463 container attach 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Feb 02 15:08:10 compute-0 sudo[94299]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev a7dc8ffa-9c13-4db2-b9a2-e8ed22424c4b (Updating rgw.rgw deployment (+1 -> 1))
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bzshzr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bzshzr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bzshzr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:10 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.bzshzr on compute-0
Feb 02 15:08:10 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.bzshzr on compute-0
Feb 02 15:08:10 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb 02 15:08:10 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb 02 15:08:10 compute-0 sudo[94585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:10 compute-0 sudo[94585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:10 compute-0 sudo[94585]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:10 compute-0 sudo[94627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:08:10 compute-0 sudo[94627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: 2.2 scrub starts
Feb 02 15:08:10 compute-0 ceph-mon[75334]: 2.2 scrub ok
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bzshzr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bzshzr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:10 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 02 15:08:10 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb 02 15:08:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2391163524' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 02 15:08:10 compute-0 kind_hoover[94581]: [client.openstack]
Feb 02 15:08:10 compute-0 kind_hoover[94581]:         key = AQBNvYBpAAAAABAAhvMLOwrnQugwbkZIzlc9Gw==
Feb 02 15:08:10 compute-0 kind_hoover[94581]:         caps mgr = "allow *"
Feb 02 15:08:10 compute-0 kind_hoover[94581]:         caps mon = "profile rbd"
Feb 02 15:08:10 compute-0 kind_hoover[94581]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb 02 15:08:10 compute-0 systemd[1]: libpod-1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.627486385 +0000 UTC m=+0.677680190 container died 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0206a6b94c7d7ef78dce0e6a8e6854116826e484850e2b48b37645e888a05999-merged.mount: Deactivated successfully.
Feb 02 15:08:10 compute-0 podman[94555]: 2026-02-02 15:08:10.660594463 +0000 UTC m=+0.710788268 container remove 1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896 (image=quay.io/ceph/ceph:v20, name=kind_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.674931865 +0000 UTC m=+0.039990894 container create 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:08:10 compute-0 systemd[1]: libpod-conmon-1c61174d78ac94e086a5f2fa6ef39a1515fb040a9326f31f871c19b1fda26896.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 sudo[94541]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:10 compute-0 systemd[1]: Started libpod-conmon-88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402.scope.
Feb 02 15:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v66: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.727130707 +0000 UTC m=+0.092189746 container init 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.731350536 +0000 UTC m=+0.096409565 container start 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.734332469 +0000 UTC m=+0.099391548 container attach 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:08:10 compute-0 festive_mcnulty[94724]: 167 167
Feb 02 15:08:10 compute-0 systemd[1]: libpod-88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.737176289 +0000 UTC m=+0.102235348 container died 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.66140745 +0000 UTC m=+0.026466499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e3815dca026487230a2cb706d5df116d21fc00be5e5d1eff694cf9c4a9c4383-merged.mount: Deactivated successfully.
Feb 02 15:08:10 compute-0 podman[94699]: 2026-02-02 15:08:10.770685546 +0000 UTC m=+0.135744575 container remove 88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:10 compute-0 systemd[1]: libpod-conmon-88cfd05ee67839a9c422123b5a896256838cd77f023804fc53f543f8bf80a402.scope: Deactivated successfully.
Feb 02 15:08:10 compute-0 systemd[1]: Reloading.
Feb 02 15:08:10 compute-0 systemd-rc-local-generator[94761]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:08:10 compute-0 systemd-sysv-generator[94768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:08:11 compute-0 systemd[1]: Reloading.
Feb 02 15:08:11 compute-0 systemd-rc-local-generator[94810]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:08:11 compute-0 systemd-sysv-generator[94813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:08:11 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.bzshzr for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:08:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 02 15:08:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 02 15:08:11 compute-0 ceph-mon[75334]: Deploying daemon rgw.rgw.compute-0.bzshzr on compute-0
Feb 02 15:08:11 compute-0 ceph-mon[75334]: 2.e scrub starts
Feb 02 15:08:11 compute-0 ceph-mon[75334]: 2.e scrub ok
Feb 02 15:08:11 compute-0 ceph-mon[75334]: 2.16 scrub starts
Feb 02 15:08:11 compute-0 ceph-mon[75334]: 2.16 scrub ok
Feb 02 15:08:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2391163524' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 02 15:08:11 compute-0 ceph-mon[75334]: pgmap v66: 38 pgs: 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 02 15:08:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 02 15:08:11 compute-0 podman[94866]: 2026-02-02 15:08:11.542526901 +0000 UTC m=+0.058279221 container create 4ceeeaded2e9584f35df059bd2042e6820b96ea722e26e0dfdd64553e3b8890a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-rgw-rgw-compute-0-bzshzr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3d90dc18ba7e2ae4af8ebb9f59d92612a4e321292ed0661d0b6293a2b7476c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3d90dc18ba7e2ae4af8ebb9f59d92612a4e321292ed0661d0b6293a2b7476c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3d90dc18ba7e2ae4af8ebb9f59d92612a4e321292ed0661d0b6293a2b7476c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3d90dc18ba7e2ae4af8ebb9f59d92612a4e321292ed0661d0b6293a2b7476c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.bzshzr supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:11 compute-0 podman[94866]: 2026-02-02 15:08:11.605806496 +0000 UTC m=+0.121558836 container init 4ceeeaded2e9584f35df059bd2042e6820b96ea722e26e0dfdd64553e3b8890a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-rgw-rgw-compute-0-bzshzr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:11 compute-0 podman[94866]: 2026-02-02 15:08:11.51500717 +0000 UTC m=+0.030759570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:11 compute-0 podman[94866]: 2026-02-02 15:08:11.617563675 +0000 UTC m=+0.133315985 container start 4ceeeaded2e9584f35df059bd2042e6820b96ea722e26e0dfdd64553e3b8890a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-rgw-rgw-compute-0-bzshzr, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:11 compute-0 bash[94866]: 4ceeeaded2e9584f35df059bd2042e6820b96ea722e26e0dfdd64553e3b8890a
Feb 02 15:08:11 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.bzshzr for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:08:11 compute-0 radosgw[94934]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:08:11 compute-0 radosgw[94934]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb 02 15:08:11 compute-0 radosgw[94934]: framework: beast
Feb 02 15:08:11 compute-0 radosgw[94934]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb 02 15:08:11 compute-0 radosgw[94934]: init_numa not setting numa affinity
Feb 02 15:08:11 compute-0 sudo[94627]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev a7dc8ffa-9c13-4db2-b9a2-e8ed22424c4b (Updating rgw.rgw deployment (+1 -> 1))
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event a7dc8ffa-9c13-4db2-b9a2-e8ed22424c4b (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 4e17d4e2-37a6-4a74-b27b-202295896184 (Updating mds.cephfs deployment (+1 -> 1))
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mcxxtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mcxxtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mcxxtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 15:08:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.mcxxtn on compute-0
Feb 02 15:08:11 compute-0 ceph-mgr[75628]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.mcxxtn on compute-0
Feb 02 15:08:11 compute-0 sudo[95014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:11 compute-0 sudo[95014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:11 compute-0 sudo[95014]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:11 compute-0 sudo[95065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid e43470b2-6632-573a-87d3-0f5428ec59e9
Feb 02 15:08:11 compute-0 sudo[95109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeevejzfatsttpkhxvuvjcspazjsbmzn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770044891.5719724-36844-188379724104233/async_wrapper.py j764827015928 30 /home/zuul/.ansible/tmp/ansible-tmp-1770044891.5719724-36844-188379724104233/AnsiballZ_command.py _'
Feb 02 15:08:11 compute-0 sudo[95065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:11 compute-0 sudo[95109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:11 compute-0 ansible-async_wrapper.py[95113]: Invoked with j764827015928 30 /home/zuul/.ansible/tmp/ansible-tmp-1770044891.5719724-36844-188379724104233/AnsiballZ_command.py _
Feb 02 15:08:11 compute-0 ansible-async_wrapper.py[95116]: Starting module and watcher
Feb 02 15:08:11 compute-0 ansible-async_wrapper.py[95116]: Start watching 95117 (30)
Feb 02 15:08:11 compute-0 ansible-async_wrapper.py[95117]: Start module (95117)
Feb 02 15:08:11 compute-0 ansible-async_wrapper.py[95113]: Return async_wrapper task started.
Feb 02 15:08:12 compute-0 sudo[95109]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:12 compute-0 python3[95118]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.23823817 +0000 UTC m=+0.046408170 container create 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.253090294 +0000 UTC m=+0.043820566 container create 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:12 compute-0 systemd[1]: Started libpod-conmon-26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb.scope.
Feb 02 15:08:12 compute-0 systemd[1]: Started libpod-conmon-41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110.scope.
Feb 02 15:08:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcfbd0d95073fb1de75acc0d62c5668a52ca5f1506b6de686afab129198769c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcfbd0d95073fb1de75acc0d62c5668a52ca5f1506b6de686afab129198769c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.217826969 +0000 UTC m=+0.025996959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.322473897 +0000 UTC m=+0.113204159 container init 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.229765162 +0000 UTC m=+0.020495434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.328888953 +0000 UTC m=+0.137058933 container init 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.329861143 +0000 UTC m=+0.120591395 container start 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:12 compute-0 systemd[1]: libpod-41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110.scope: Deactivated successfully.
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.335206576 +0000 UTC m=+0.125936868 container attach 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:08:12 compute-0 beautiful_grothendieck[95193]: 167 167
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.336594465 +0000 UTC m=+0.144764435 container start 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:08:12 compute-0 conmon[95193]: conmon 41c00c02bd8d4203f2ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110.scope/container/memory.events
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.339934516 +0000 UTC m=+0.148104476 container attach 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.34013487 +0000 UTC m=+0.130865122 container died 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c47f79d9b2f3def5e3e76eca3c5bffdc43abc3aef3b2da64003b607e082165-merged.mount: Deactivated successfully.
Feb 02 15:08:12 compute-0 podman[95167]: 2026-02-02 15:08:12.374298021 +0000 UTC m=+0.165028313 container remove 41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:08:12 compute-0 systemd[1]: libpod-conmon-41c00c02bd8d4203f2ae5123ef647c327d7b44fbb0225b66fb6896575a50f110.scope: Deactivated successfully.
Feb 02 15:08:12 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb 02 15:08:12 compute-0 systemd[1]: Reloading.
Feb 02 15:08:12 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb 02 15:08:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb 02 15:08:12 compute-0 ceph-mon[75334]: 2.11 scrub starts
Feb 02 15:08:12 compute-0 ceph-mon[75334]: 2.11 scrub ok
Feb 02 15:08:12 compute-0 ceph-mon[75334]: 2.17 scrub starts
Feb 02 15:08:12 compute-0 ceph-mon[75334]: 2.17 scrub ok
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:12 compute-0 ceph-mon[75334]: Saving service rgw.rgw spec with placement compute-0
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mcxxtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mcxxtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 02 15:08:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:12 compute-0 ceph-mon[75334]: Deploying daemon mds.cephfs.compute-0.mcxxtn on compute-0
Feb 02 15:08:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb 02 15:08:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb 02 15:08:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb 02 15:08:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1645336320' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 02 15:08:12 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:12 compute-0 systemd-sysv-generator[95265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:08:12 compute-0 systemd-rc-local-generator[95262]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:08:12 compute-0 systemd[1]: Reloading.
Feb 02 15:08:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v68: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:12 compute-0 systemd-rc-local-generator[95298]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:08:12 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:12 compute-0 systemd-sysv-generator[95303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:08:12 compute-0 cranky_taussig[95191]: 
Feb 02 15:08:12 compute-0 cranky_taussig[95191]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.790522693 +0000 UTC m=+0.598692693 container died 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:08:12 compute-0 systemd[1]: libpod-26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb.scope: Deactivated successfully.
Feb 02 15:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bcfbd0d95073fb1de75acc0d62c5668a52ca5f1506b6de686afab129198769c-merged.mount: Deactivated successfully.
Feb 02 15:08:12 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.mcxxtn for e43470b2-6632-573a-87d3-0f5428ec59e9...
Feb 02 15:08:12 compute-0 podman[95151]: 2026-02-02 15:08:12.951024819 +0000 UTC m=+0.759194819 container remove 26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb (image=quay.io/ceph/ceph:v20, name=cranky_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:12 compute-0 systemd[1]: libpod-conmon-26d1bf590b5a342ce979e8f8c8a2ba2d0f14cb3f3c92b889a5d4a8c7cd9c5efb.scope: Deactivated successfully.
Feb 02 15:08:12 compute-0 ansible-async_wrapper.py[95117]: Module complete (95117)
Feb 02 15:08:13 compute-0 sudo[95419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotopttfesquacnrjkkwjpfgpcccqddv ; /usr/bin/python3'
Feb 02 15:08:13 compute-0 sudo[95419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:13 compute-0 podman[95416]: 2026-02-02 15:08:13.134517401 +0000 UTC m=+0.034619272 container create 90831cfc5982fb96ef8af514b9c0b1aece7017c7b7d881f83ae4b4b84a33d150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mds-cephfs-compute-0-mcxxtn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289810b762d4e600f21201d973f337b089ad99c7c356eb693012b582ddfdcfc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289810b762d4e600f21201d973f337b089ad99c7c356eb693012b582ddfdcfc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289810b762d4e600f21201d973f337b089ad99c7c356eb693012b582ddfdcfc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289810b762d4e600f21201d973f337b089ad99c7c356eb693012b582ddfdcfc5/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.mcxxtn supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:13 compute-0 podman[95416]: 2026-02-02 15:08:13.119632537 +0000 UTC m=+0.019734428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:13 compute-0 python3[95430]: ansible-ansible.legacy.async_status Invoked with jid=j764827015928.95113 mode=status _async_dir=/root/.ansible_async
Feb 02 15:08:13 compute-0 sudo[95419]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:13 compute-0 podman[95416]: 2026-02-02 15:08:13.233336226 +0000 UTC m=+0.133438137 container init 90831cfc5982fb96ef8af514b9c0b1aece7017c7b7d881f83ae4b4b84a33d150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mds-cephfs-compute-0-mcxxtn, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:08:13 compute-0 podman[95416]: 2026-02-02 15:08:13.239954345 +0000 UTC m=+0.140056236 container start 90831cfc5982fb96ef8af514b9c0b1aece7017c7b7d881f83ae4b4b84a33d150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mds-cephfs-compute-0-mcxxtn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:13 compute-0 bash[95416]: 90831cfc5982fb96ef8af514b9c0b1aece7017c7b7d881f83ae4b4b84a33d150
Feb 02 15:08:13 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.mcxxtn for e43470b2-6632-573a-87d3-0f5428ec59e9.
Feb 02 15:08:13 compute-0 ceph-mds[95441]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:08:13 compute-0 ceph-mds[95441]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb 02 15:08:13 compute-0 ceph-mds[95441]: main not setting numa affinity
Feb 02 15:08:13 compute-0 ceph-mds[95441]: pidfile_write: ignore empty --pid-file
Feb 02 15:08:13 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mds-cephfs-compute-0-mcxxtn[95437]: starting mds.cephfs.compute-0.mcxxtn at 
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn Updating MDS map to version 2 from mon.0
Feb 02 15:08:13 compute-0 sudo[95065]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 4e17d4e2-37a6-4a74-b27b-202295896184 (Updating mds.cephfs deployment (+1 -> 1))
Feb 02 15:08:13 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 4e17d4e2-37a6-4a74-b27b-202295896184 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb 02 15:08:13 compute-0 sudo[95506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwwfydfknqgyvywhivhktjlpvcjujmc ; /usr/bin/python3'
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb 02 15:08:13 compute-0 sudo[95506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 sudo[95509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:08:13 compute-0 sudo[95509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:13 compute-0 sudo[95509]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:13 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb 02 15:08:13 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb 02 15:08:13 compute-0 sudo[95534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:13 compute-0 sudo[95534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:13 compute-0 sudo[95534]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1645336320' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 02 15:08:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb 02 15:08:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e3 new map
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-02-02T15:08:13:445206+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T15:08:03.372406+0000
                                           modified        2026-02-02T15:08:03.372406+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.mcxxtn{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] compat {c=[1],r=[1],i=[1fff]}]
Feb 02 15:08:13 compute-0 ceph-mon[75334]: 2.19 scrub starts
Feb 02 15:08:13 compute-0 ceph-mon[75334]: 2.19 scrub ok
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn Updating MDS map to version 3 from mon.0
Feb 02 15:08:13 compute-0 ceph-mon[75334]: osdmap e32: 3 total, 3 up, 3 in
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1645336320' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 02 15:08:13 compute-0 ceph-mon[75334]: pgmap v68: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn Monitors have assigned me to become a standby
Feb 02 15:08:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:13 compute-0 python3[95508]: ansible-ansible.legacy.async_status Invoked with jid=j764827015928.95113 mode=cleanup _async_dir=/root/.ansible_async
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] up:boot
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] as mds.0
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mcxxtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 02 15:08:13 compute-0 sudo[95506]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.mcxxtn"} v 0)
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.mcxxtn"} : dispatch
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e3 all = 0
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e4 new map
Feb 02 15:08:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-02-02T15:08:13:469848+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T15:08:03.372406+0000
                                           modified        2026-02-02T15:08:13.469842+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.mcxxtn{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Feb 02 15:08:13 compute-0 sudo[95559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mcxxtn=up:creating}
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn Updating MDS map to version 4 from mon.0
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Feb 02 15:08:13 compute-0 sudo[95559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x1
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x100
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x600
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x601
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x602
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x603
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x604
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x605
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x606
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x607
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x608
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.cache creating system inode with ino:0x609
Feb 02 15:08:13 compute-0 ceph-mds[95441]: mds.0.4 creating_done
Feb 02 15:08:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mcxxtn is now active in filesystem cephfs as rank 0
Feb 02 15:08:13 compute-0 sudo[96227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvgjwycfgrcogumysskocvotimgdjwkk ; /usr/bin/python3'
Feb 02 15:08:13 compute-0 sudo[96227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:13 compute-0 podman[96226]: 2026-02-02 15:08:13.892997595 +0000 UTC m=+0.059803993 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:13 compute-0 python3[96235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:14 compute-0 podman[96226]: 2026-02-02 15:08:14.008029552 +0000 UTC m=+0.174835910 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.04207905 +0000 UTC m=+0.039508335 container create 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:08:14 compute-0 systemd[1]: Started libpod-conmon-2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145.scope.
Feb 02 15:08:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c1e37a721b4e139f4b8ca3a0a2bba5f3e6765746c7f03a2fe121013d58fc5c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c1e37a721b4e139f4b8ca3a0a2bba5f3e6765746c7f03a2fe121013d58fc5c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.102835742 +0000 UTC m=+0.100265037 container init 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.108048971 +0000 UTC m=+0.105478256 container start 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.111396023 +0000 UTC m=+0.108825318 container attach 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.026073162 +0000 UTC m=+0.023502467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:14 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 02 15:08:14 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: 2.13 scrub starts
Feb 02 15:08:14 compute-0 ceph-mon[75334]: 2.13 scrub ok
Feb 02 15:08:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1645336320' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 02 15:08:14 compute-0 ceph-mon[75334]: 2.7 scrub starts
Feb 02 15:08:14 compute-0 ceph-mon[75334]: osdmap e33: 3 total, 3 up, 3 in
Feb 02 15:08:14 compute-0 ceph-mon[75334]: 2.7 scrub ok
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mds.? [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] up:boot
Feb 02 15:08:14 compute-0 ceph-mon[75334]: daemon mds.cephfs.compute-0.mcxxtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: Cluster is now healthy
Feb 02 15:08:14 compute-0 ceph-mon[75334]: fsmap cephfs:0 1 up:standby
Feb 02 15:08:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.mcxxtn"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: fsmap cephfs:1 {0=cephfs.compute-0.mcxxtn=up:creating}
Feb 02 15:08:14 compute-0 ceph-mon[75334]: daemon mds.cephfs.compute-0.mcxxtn is now active in filesystem cephfs as rank 0
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e5 new map
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-02-02T15:08:14:482474+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-02T15:08:03.372406+0000
                                           modified        2026-02-02T15:08:14.482470+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14253 members: 14253
                                           [mds.cephfs.compute-0.mcxxtn{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Feb 02 15:08:14 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn Updating MDS map to version 5 from mon.0
Feb 02 15:08:14 compute-0 ceph-mds[95441]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 02 15:08:14 compute-0 ceph-mds[95441]: mds.0.4 handle_mds_map state change up:creating --> up:active
Feb 02 15:08:14 compute-0 ceph-mds[95441]: mds.0.4 recovery_done -- successful recovery!
Feb 02 15:08:14 compute-0 ceph-mds[95441]: mds.0.4 active_start
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] up:active
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mcxxtn=up:active}
Feb 02 15:08:14 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:14 compute-0 fervent_jones[96288]: 
Feb 02 15:08:14 compute-0 fervent_jones[96288]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 02 15:08:14 compute-0 systemd[1]: libpod-2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145.scope: Deactivated successfully.
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.54589924 +0000 UTC m=+0.543328535 container died 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c1e37a721b4e139f4b8ca3a0a2bba5f3e6765746c7f03a2fe121013d58fc5c-merged.mount: Deactivated successfully.
Feb 02 15:08:14 compute-0 podman[96249]: 2026-02-02 15:08:14.595018756 +0000 UTC m=+0.592448041 container remove 2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145 (image=quay.io/ceph/ceph:v20, name=fervent_jones, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:08:14 compute-0 systemd[1]: libpod-conmon-2928abcb2ea34ea16257a84fb96dd7209e12928123ada2d05dae1c5602c1b145.scope: Deactivated successfully.
Feb 02 15:08:14 compute-0 sudo[96227]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 6 completed events
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:08:14 compute-0 sudo[95559]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v71: 40 pgs: 2 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:14 compute-0 sudo[96467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:14 compute-0 sudo[96467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:14 compute-0 sudo[96467]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:14 compute-0 sudo[96492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:08:14 compute-0 sudo[96492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.135575002 +0000 UTC m=+0.043521759 container create 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:08:15 compute-0 systemd[1]: Started libpod-conmon-0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8.scope.
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.117167664 +0000 UTC m=+0.025114401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.22740985 +0000 UTC m=+0.135356657 container init 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:15 compute-0 sudo[96572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoazzodidhrnxfysgwwufkqrqrhvfqxz ; /usr/bin/python3'
Feb 02 15:08:15 compute-0 sudo[96572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.236657505 +0000 UTC m=+0.144604222 container start 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:08:15 compute-0 wonderful_keldysh[96551]: 167 167
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.240856704 +0000 UTC m=+0.148803421 container attach 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:08:15 compute-0 systemd[1]: libpod-0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8.scope: Deactivated successfully.
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.241733062 +0000 UTC m=+0.149679819 container died 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-feca1b71e81147201f84b075db47a413a96c9d51bc0db57ad971e50c9b8847e3-merged.mount: Deactivated successfully.
Feb 02 15:08:15 compute-0 podman[96529]: 2026-02-02 15:08:15.285097827 +0000 UTC m=+0.193044554 container remove 0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:08:15 compute-0 systemd[1]: libpod-conmon-0042a73b0751fabd69dfe7f89e0fa63d5d9ebb06c299f53378c03b7124ba35d8.scope: Deactivated successfully.
Feb 02 15:08:15 compute-0 python3[96575]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.43219234 +0000 UTC m=+0.039901252 container create 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:08:15 compute-0 podman[96596]: 2026-02-02 15:08:15.455022842 +0000 UTC m=+0.058771161 container create 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb 02 15:08:15 compute-0 systemd[1]: Started libpod-conmon-3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9.scope.
Feb 02 15:08:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 15:08:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb 02 15:08:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb 02 15:08:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb 02 15:08:15 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:15 compute-0 ceph-mon[75334]: 2.18 scrub starts
Feb 02 15:08:15 compute-0 ceph-mon[75334]: 2.18 scrub ok
Feb 02 15:08:15 compute-0 ceph-mon[75334]: osdmap e34: 3 total, 3 up, 3 in
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: mds.? [v2:192.168.122.100:6814/3357967106,v1:192.168.122.100:6815/3357967106] up:active
Feb 02 15:08:15 compute-0 ceph-mon[75334]: fsmap cephfs:1 {0=cephfs.compute-0.mcxxtn=up:active}
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:15 compute-0 ceph-mon[75334]: pgmap v71: 40 pgs: 2 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 02 15:08:15 compute-0 ceph-mon[75334]: osdmap e35: 3 total, 3 up, 3 in
Feb 02 15:08:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb 02 15:08:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:15 compute-0 systemd[1]: Started libpod-conmon-71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679.scope.
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f36e6a2c9d57fcecbc29b2146c35f717b6ede407284c5fe64fca1696d170d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6f36e6a2c9d57fcecbc29b2146c35f717b6ede407284c5fe64fca1696d170d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.414459966 +0000 UTC m=+0.022168908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:15 compute-0 podman[96596]: 2026-02-02 15:08:15.429640486 +0000 UTC m=+0.033388855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.531169839 +0000 UTC m=+0.138878771 container init 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:15 compute-0 podman[96596]: 2026-02-02 15:08:15.536961971 +0000 UTC m=+0.140710280 container init 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.537227006 +0000 UTC m=+0.144935918 container start 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.540888334 +0000 UTC m=+0.148597266 container attach 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:08:15 compute-0 podman[96596]: 2026-02-02 15:08:15.542793814 +0000 UTC m=+0.146542103 container start 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:15 compute-0 podman[96596]: 2026-02-02 15:08:15.546463242 +0000 UTC m=+0.150211581 container attach 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:15 compute-0 vigorous_poitras[96624]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:08:15 compute-0 vigorous_poitras[96624]: --> All data devices are unavailable
Feb 02 15:08:15 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} v 0)
Feb 02 15:08:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 15:08:15 compute-0 eager_matsumoto[96631]: 
Feb 02 15:08:15 compute-0 eager_matsumoto[96631]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb 02 15:08:15 compute-0 systemd[1]: libpod-3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9.scope: Deactivated successfully.
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.922605098 +0000 UTC m=+0.530314020 container died 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:08:15 compute-0 systemd[1]: libpod-71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679.scope: Deactivated successfully.
Feb 02 15:08:15 compute-0 conmon[96631]: conmon 71979e9adcfeb05c009d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679.scope/container/memory.events
Feb 02 15:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-07d684073dcc88cf0f4afd44efb46db443120c6e60e6cd0ed06fa60564e83289-merged.mount: Deactivated successfully.
Feb 02 15:08:15 compute-0 podman[96597]: 2026-02-02 15:08:15.965559344 +0000 UTC m=+0.573268286 container remove 3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:08:15 compute-0 podman[96679]: 2026-02-02 15:08:15.970310084 +0000 UTC m=+0.030660937 container died 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:08:15 compute-0 systemd[1]: libpod-conmon-3e7b90608735c73363516e00f32ab318fc5ebfe1ba8b03689a11665848ce47f9.scope: Deactivated successfully.
Feb 02 15:08:15 compute-0 podman[96679]: 2026-02-02 15:08:15.998481629 +0000 UTC m=+0.058832402 container remove 71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679 (image=quay.io/ceph/ceph:v20, name=eager_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 02 15:08:16 compute-0 systemd[1]: libpod-conmon-71979e9adcfeb05c009df867fd14228348474f0faebc0ebe698607d578d2e679.scope: Deactivated successfully.
Feb 02 15:08:16 compute-0 sudo[96492]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:16 compute-0 sudo[96572]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:16 compute-0 sudo[96701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:16 compute-0 sudo[96701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:16 compute-0 sudo[96701]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:16 compute-0 sudo[96726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:08:16 compute-0 sudo[96726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6f36e6a2c9d57fcecbc29b2146c35f717b6ede407284c5fe64fca1696d170d6-merged.mount: Deactivated successfully.
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.38952823 +0000 UTC m=+0.050426675 container create c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:16 compute-0 systemd[1]: Started libpod-conmon-c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f.scope.
Feb 02 15:08:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.371344666 +0000 UTC m=+0.032243121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.47343612 +0000 UTC m=+0.134334615 container init c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:08:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb 02 15:08:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb 02 15:08:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 02 15:08:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.48291681 +0000 UTC m=+0.143815265 container start c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:08:16 compute-0 jovial_mayer[96780]: 167 167
Feb 02 15:08:16 compute-0 systemd[1]: libpod-c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f.scope: Deactivated successfully.
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.488359964 +0000 UTC m=+0.149258419 container attach c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.48909065 +0000 UTC m=+0.149989085 container died c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c0094496eadac1f9a2a9bc4330cff380053659b1017a517aac3826b960986b-merged.mount: Deactivated successfully.
Feb 02 15:08:16 compute-0 podman[96764]: 2026-02-02 15:08:16.522812012 +0000 UTC m=+0.183710417 container remove c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:16 compute-0 systemd[1]: libpod-conmon-c32e0d05cbcce10c028e2436ce8ed8204c2c377ca8f78138fb0d7eb0aaa5648f.scope: Deactivated successfully.
Feb 02 15:08:16 compute-0 podman[96804]: 2026-02-02 15:08:16.651194051 +0000 UTC m=+0.052303885 container create 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:08:16 compute-0 systemd[1]: Started libpod-conmon-1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db.scope.
Feb 02 15:08:16 compute-0 sudo[96841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwkzimtsyuyrdlqrlkuppgnoorediabo ; /usr/bin/python3'
Feb 02 15:08:16 compute-0 sudo[96841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86043156c0747453e56a20be66cca2588e0a6dd0e586ae0dcbf94240d2ae5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86043156c0747453e56a20be66cca2588e0a6dd0e586ae0dcbf94240d2ae5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86043156c0747453e56a20be66cca2588e0a6dd0e586ae0dcbf94240d2ae5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86043156c0747453e56a20be66cca2588e0a6dd0e586ae0dcbf94240d2ae5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 podman[96804]: 2026-02-02 15:08:16.627638104 +0000 UTC m=+0.028747998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v74: 41 pgs: 1 unknown, 1 creating+peering, 39 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 02 15:08:16 compute-0 ceph-mon[75334]: 2.1b scrub starts
Feb 02 15:08:16 compute-0 ceph-mon[75334]: 2.1b scrub ok
Feb 02 15:08:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 15:08:16 compute-0 ceph-mon[75334]: osdmap e36: 3 total, 3 up, 3 in
Feb 02 15:08:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 02 15:08:16 compute-0 podman[96804]: 2026-02-02 15:08:16.732877014 +0000 UTC m=+0.133986888 container init 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:08:16 compute-0 podman[96804]: 2026-02-02 15:08:16.740087766 +0000 UTC m=+0.141197570 container start 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:08:16 compute-0 podman[96804]: 2026-02-02 15:08:16.744137302 +0000 UTC m=+0.145247186 container attach 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:16 compute-0 python3[96848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:16 compute-0 podman[96851]: 2026-02-02 15:08:16.884430541 +0000 UTC m=+0.030660448 container create f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:16 compute-0 systemd[1]: Started libpod-conmon-f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a.scope.
Feb 02 15:08:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ef407b94b77bfb635f79bf204e843a232b394f30d8435160e1e449d3eccbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ef407b94b77bfb635f79bf204e843a232b394f30d8435160e1e449d3eccbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:16 compute-0 podman[96851]: 2026-02-02 15:08:16.956685736 +0000 UTC m=+0.102915683 container init f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:16 compute-0 podman[96851]: 2026-02-02 15:08:16.961499617 +0000 UTC m=+0.107729524 container start f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:16 compute-0 podman[96851]: 2026-02-02 15:08:16.964682055 +0000 UTC m=+0.110912012 container attach f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:08:16 compute-0 podman[96851]: 2026-02-02 15:08:16.870525228 +0000 UTC m=+0.016755145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:16 compute-0 practical_euclid[96845]: {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     "0": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "devices": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "/dev/loop3"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             ],
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_name": "ceph_lv0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_size": "21470642176",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "name": "ceph_lv0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "tags": {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.crush_device_class": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.encrypted": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_id": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.vdo": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.with_tpm": "0"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             },
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "vg_name": "ceph_vg0"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         }
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     ],
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     "1": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "devices": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "/dev/loop4"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             ],
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_name": "ceph_lv1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_size": "21470642176",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "name": "ceph_lv1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "tags": {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.crush_device_class": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.encrypted": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_id": "1",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.vdo": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.with_tpm": "0"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             },
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "vg_name": "ceph_vg1"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         }
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     ],
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     "2": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "devices": [
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "/dev/loop5"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             ],
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_name": "ceph_lv2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_size": "21470642176",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "name": "ceph_lv2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "tags": {
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.crush_device_class": "",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.encrypted": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osd_id": "2",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.vdo": "0",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:                 "ceph.with_tpm": "0"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             },
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "type": "block",
Feb 02 15:08:16 compute-0 practical_euclid[96845]:             "vg_name": "ceph_vg2"
Feb 02 15:08:16 compute-0 practical_euclid[96845]:         }
Feb 02 15:08:16 compute-0 practical_euclid[96845]:     ]
Feb 02 15:08:16 compute-0 practical_euclid[96845]: }
Feb 02 15:08:16 compute-0 ansible-async_wrapper.py[95116]: Done in kid B.
Feb 02 15:08:17 compute-0 systemd[1]: libpod-1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 podman[96874]: 2026-02-02 15:08:17.040543536 +0000 UTC m=+0.024653242 container died 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:08:17 compute-0 podman[96874]: 2026-02-02 15:08:17.083332848 +0000 UTC m=+0.067442544 container remove 1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_euclid, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:17 compute-0 systemd[1]: libpod-conmon-1d737fac588bcb604d8d5123cee4a2b2cedcf419711f3ee6ab0eb531bd6869db.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 sudo[96726]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a86043156c0747453e56a20be66cca2588e0a6dd0e586ae0dcbf94240d2ae5a-merged.mount: Deactivated successfully.
Feb 02 15:08:17 compute-0 sudo[96909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:17 compute-0 sudo[96909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:17 compute-0 sudo[96909]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:17 compute-0 sudo[96934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:08:17 compute-0 sudo[96934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:17 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:17 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:17 compute-0 keen_swartz[96868]: 
Feb 02 15:08:17 compute-0 keen_swartz[96868]: [{"container_id": "74836a9dee83", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.19%", "created": "2026-02-02T15:07:04.044541Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-02T15:07:04.097639Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720071Z", "memory_usage": 7803502, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-02T15:07:03.942905Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@crash.compute-0", "version": "20.2.0"}, {"container_id": "90831cfc5982", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.19%", "created": "2026-02-02T15:08:13.253063Z", "daemon_id": "cephfs.compute-0.mcxxtn", "daemon_name": "mds.cephfs.compute-0.mcxxtn", "daemon_type": "mds", "events": ["2026-02-02T15:08:13.330201Z daemon:mds.cephfs.compute-0.mcxxtn [INFO] \"Deployed mds.cephfs.compute-0.mcxxtn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720604Z", "memory_usage": 18360565, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-02T15:08:13.124863Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mds.cephfs.compute-0.mcxxtn", "version": "20.2.0"}, {"container_id": "b5c8003f4156", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "19.95%", "created": "2026-02-02T15:06:27.491680Z", "daemon_id": "compute-0.rxryxi", "daemon_name": "mgr.compute-0.rxryxi", "daemon_type": "mgr", "events": ["2026-02-02T15:07:07.901317Z daemon:mgr.compute-0.rxryxi [INFO] \"Reconfigured mgr.compute-0.rxryxi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.719978Z", "memory_usage": 546203238, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-02T15:06:27.070638Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mgr.compute-0.rxryxi", "version": "20.2.0"}, {"container_id": "a5faa4b9cf66", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.90%", "created": "2026-02-02T15:06:23.532586Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-02T15:07:07.358644Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.719854Z", "memory_request": 2147483648, "memory_usage": 39824916, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-02T15:06:25.460954Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@mon.compute-0", "version": "20.2.0"}, {"container_id": "27a84c32fe88", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.71%", "created": "2026-02-02T15:07:25.148012Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-02T15:07:25.222491Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720163Z", "memory_request": 4294967296, "memory_usage": 59380858, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T15:07:25.041558Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@osd.0", "version": "20.2.0"}, {"container_id": "9db978a5e94b", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.83%", "created": "2026-02-02T15:07:28.931096Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-02T15:07:28.999115Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720256Z", "memory_request": 4294967296, "memory_usage": 61142466, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T15:07:28.822627Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@osd.1", "version": "20.2.0"}, {"container_id": "f9df3643bc3a", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.08%", "created": "2026-02-02T15:07:32.553177Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-02T15:07:32.649326Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720351Z", "memory_request": 4294967296, "memory_usage": 61100523, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T15:07:32.423300Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@osd.2", "version": "20.2.0"}, {"container_id": "4ceeeaded2e9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "4.49%", "created": "2026-02-02T15:08:11.629030Z", "daemon_id": "rgw.compute-0.bzshzr", "daemon_name": "rgw.rgw.compute-0.bzshzr", "daemon_type": "rgw", "events": ["2026-02-02T15:08:11.711864Z daemon:rgw.rgw.compute-0.bzshzr [INFO] \"Deployed rgw.rgw.compute-0.bzshzr on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-02-02T15:08:14.720483Z", "memory_usage": 55165583, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-02T15:08:11.522254Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e43470b2-6632-573a-87d3-0f5428ec59e9@rgw.rgw.compute-0.bzshzr", "version": "20.2.0"}]
Feb 02 15:08:17 compute-0 systemd[1]: libpod-f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 podman[96851]: 2026-02-02 15:08:17.360013576 +0000 UTC m=+0.506243513 container died f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-417ef407b94b77bfb635f79bf204e843a232b394f30d8435160e1e449d3eccbf-merged.mount: Deactivated successfully.
Feb 02 15:08:17 compute-0 podman[96851]: 2026-02-02 15:08:17.403179047 +0000 UTC m=+0.549408944 container remove f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a (image=quay.io/ceph/ceph:v20, name=keen_swartz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:08:17 compute-0 systemd[1]: libpod-conmon-f165076e5ad8c365c5ddee76c25180955d1cd7711593fd4383e5e349ac424c4a.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 sudo[96841]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb 02 15:08:17 compute-0 rsyslogd[1004]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "74836a9dee83", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 02 15:08:17 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 15:08:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.496364003 +0000 UTC m=+0.058148228 container create 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb 02 15:08:17 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:17 compute-0 systemd[1]: Started libpod-conmon-070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2.scope.
Feb 02 15:08:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.470223322 +0000 UTC m=+0.032007627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.574985702 +0000 UTC m=+0.136769917 container init 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.581426818 +0000 UTC m=+0.143211033 container start 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.584109324 +0000 UTC m=+0.145893539 container attach 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:08:17 compute-0 keen_banach[96999]: 167 167
Feb 02 15:08:17 compute-0 systemd[1]: libpod-070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.586001594 +0000 UTC m=+0.147785829 container died 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-701d4ab39febbb47b9f1231c9958e39762a356a35a564bf88ec24d56e4648240-merged.mount: Deactivated successfully.
Feb 02 15:08:17 compute-0 podman[96983]: 2026-02-02 15:08:17.623162629 +0000 UTC m=+0.184946844 container remove 070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:08:17 compute-0 systemd[1]: libpod-conmon-070c6af28f94c0f8585a3db599b78be22115bde3af200d916bc3b6e3e1084cc2.scope: Deactivated successfully.
Feb 02 15:08:17 compute-0 ceph-mon[75334]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:17 compute-0 ceph-mon[75334]: pgmap v74: 41 pgs: 1 unknown, 1 creating+peering, 39 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 02 15:08:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 02 15:08:17 compute-0 ceph-mon[75334]: osdmap e37: 3 total, 3 up, 3 in
Feb 02 15:08:17 compute-0 podman[97025]: 2026-02-02 15:08:17.758640777 +0000 UTC m=+0.044428618 container create fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:08:17 compute-0 systemd[1]: Started libpod-conmon-fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e.scope.
Feb 02 15:08:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbe04a8027e22e3dc91c5cddb3a29b834e15fa8450ee96b02a4da9d82491792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbe04a8027e22e3dc91c5cddb3a29b834e15fa8450ee96b02a4da9d82491792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbe04a8027e22e3dc91c5cddb3a29b834e15fa8450ee96b02a4da9d82491792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbe04a8027e22e3dc91c5cddb3a29b834e15fa8450ee96b02a4da9d82491792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:17 compute-0 podman[97025]: 2026-02-02 15:08:17.74412055 +0000 UTC m=+0.029908411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:17 compute-0 podman[97025]: 2026-02-02 15:08:17.846837288 +0000 UTC m=+0.132625149 container init fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:17 compute-0 podman[97025]: 2026-02-02 15:08:17.855159914 +0000 UTC m=+0.140947785 container start fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:08:17 compute-0 podman[97025]: 2026-02-02 15:08:17.860165969 +0000 UTC m=+0.145953840 container attach fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:08:18 compute-0 sudo[97080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotgqzvxmpsmtiitishuxwcxmxtfpxms ; /usr/bin/python3'
Feb 02 15:08:18 compute-0 sudo[97080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:18 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb 02 15:08:18 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb 02 15:08:18 compute-0 python3[97082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.325375315 +0000 UTC m=+0.043308855 container create e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:08:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb 02 15:08:18 compute-0 systemd[1]: Started libpod-conmon-e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19.scope.
Feb 02 15:08:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1ede4e2ca29d73ea8722fc69844fb4a41fa19bdabeb872d7c61e2844bc41f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1ede4e2ca29d73ea8722fc69844fb4a41fa19bdabeb872d7c61e2844bc41f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.398183881 +0000 UTC m=+0.116117441 container init e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.306633229 +0000 UTC m=+0.024566809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.404113086 +0000 UTC m=+0.122046616 container start e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.406964176 +0000 UTC m=+0.124897946 container attach e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:08:18 compute-0 ceph-mds[95441]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 02 15:08:18 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mds-cephfs-compute-0-mcxxtn[95437]: 2026-02-02T15:08:18.486+0000 7fcf8c855640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb 02 15:08:18 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:18 compute-0 lvm[97173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:18 compute-0 lvm[97173]: VG ceph_vg0 finished
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 02 15:08:18 compute-0 lvm[97176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:08:18 compute-0 lvm[97176]: VG ceph_vg1 finished
Feb 02 15:08:18 compute-0 lvm[97187]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:08:18 compute-0 lvm[97187]: VG ceph_vg2 finished
Feb 02 15:08:18 compute-0 vigorous_colden[97042]: {}
Feb 02 15:08:18 compute-0 systemd[1]: libpod-fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e.scope: Deactivated successfully.
Feb 02 15:08:18 compute-0 podman[97025]: 2026-02-02 15:08:18.644205492 +0000 UTC m=+0.929993343 container died fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:18 compute-0 systemd[1]: libpod-fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e.scope: Consumed 1.129s CPU time.
Feb 02 15:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbe04a8027e22e3dc91c5cddb3a29b834e15fa8450ee96b02a4da9d82491792-merged.mount: Deactivated successfully.
Feb 02 15:08:18 compute-0 podman[97025]: 2026-02-02 15:08:18.695986804 +0000 UTC m=+0.981774655 container remove fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_colden, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:08:18 compute-0 systemd[1]: libpod-conmon-fcb421572b9a6521656def91c42fd29873cc8c39d2453b7b26adec19b7ab596e.scope: Deactivated successfully.
Feb 02 15:08:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v77: 42 pgs: 1 unknown, 1 creating+peering, 40 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 02 15:08:18 compute-0 sudo[96934]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:18 compute-0 ceph-mon[75334]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 02 15:08:18 compute-0 ceph-mon[75334]: 2.1d scrub starts
Feb 02 15:08:18 compute-0 ceph-mon[75334]: 2.1d scrub ok
Feb 02 15:08:18 compute-0 ceph-mon[75334]: osdmap e38: 3 total, 3 up, 3 in
Feb 02 15:08:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:18 compute-0 sudo[97202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:08:18 compute-0 sudo[97202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:18 compute-0 sudo[97202]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:18 compute-0 sudo[97227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:18 compute-0 sudo[97227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:18 compute-0 sudo[97227]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:18 compute-0 sudo[97252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:08:18 compute-0 sudo[97252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 02 15:08:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870326620' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:08:18 compute-0 recursing_northcutt[97150]: 
Feb 02 15:08:18 compute-0 recursing_northcutt[97150]: {"fsid":"e43470b2-6632-573a-87d3-0f5428ec59e9","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":113,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1770044857,"num_in_osds":3,"osd_in_since":1770044838,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":39},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":41,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":83955712,"bytes_avail":64327970816,"bytes_total":64411926528,"unknown_pgs_ratio":0.024390242993831635,"inactive_pgs_ratio":0.024390242993831635,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-02-02T15:08:14:482474+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.mcxxtn","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T15:07:44.716052+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 02 15:08:18 compute-0 systemd[1]: libpod-e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19.scope: Deactivated successfully.
Feb 02 15:08:18 compute-0 podman[97104]: 2026-02-02 15:08:18.97164311 +0000 UTC m=+0.689576670 container died e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-de1ede4e2ca29d73ea8722fc69844fb4a41fa19bdabeb872d7c61e2844bc41f2-merged.mount: Deactivated successfully.
Feb 02 15:08:19 compute-0 podman[97104]: 2026-02-02 15:08:19.015642589 +0000 UTC m=+0.733576129 container remove e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19 (image=quay.io/ceph/ceph:v20, name=recursing_northcutt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:19 compute-0 sudo[97080]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:19 compute-0 systemd[1]: libpod-conmon-e3beff44b76faecbfb322dfddfe7be0ff47edecff2f8cc5a32858c6d0b47aa19.scope: Deactivated successfully.
Feb 02 15:08:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb 02 15:08:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb 02 15:08:19 compute-0 podman[97333]: 2026-02-02 15:08:19.320836178 +0000 UTC m=+0.058338481 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:08:19 compute-0 podman[97333]: 2026-02-02 15:08:19.430456561 +0000 UTC m=+0.167958844 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:08:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb 02 15:08:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 15:08:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb 02 15:08:19 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb 02 15:08:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 02 15:08:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 02 15:08:19 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 02 15:08:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 02 15:08:19 compute-0 ceph-mon[75334]: 2.10 scrub starts
Feb 02 15:08:19 compute-0 ceph-mon[75334]: 2.10 scrub ok
Feb 02 15:08:19 compute-0 ceph-mon[75334]: pgmap v77: 42 pgs: 1 unknown, 1 creating+peering, 40 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3870326620' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 02 15:08:19 compute-0 ceph-mon[75334]: osdmap e39: 3 total, 3 up, 3 in
Feb 02 15:08:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 02 15:08:19 compute-0 sudo[97462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvpqkhbaqpqqhaqaxsoqjdiivipjsier ; /usr/bin/python3'
Feb 02 15:08:19 compute-0 sudo[97462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:19 compute-0 python3[97472]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.00624156 +0000 UTC m=+0.057722969 container create 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:08:20 compute-0 systemd[1]: Started libpod-conmon-10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2.scope.
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:19.980682651 +0000 UTC m=+0.032164130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b45e2b10f3ba7080b92747ae21fda8799725921790a95fbb4f796c2a32cb9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b45e2b10f3ba7080b92747ae21fda8799725921790a95fbb4f796c2a32cb9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.119544741 +0000 UTC m=+0.171026190 container init 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.126844184 +0000 UTC m=+0.178325583 container start 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.131577815 +0000 UTC m=+0.183059214 container attach 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:20 compute-0 sudo[97252]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:20 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 02 15:08:20 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 02 15:08:20 compute-0 sudo[97565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:20 compute-0 sudo[97565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:20 compute-0 sudo[97565]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:20 compute-0 sudo[97609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:08:20 compute-0 sudo[97609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 02 15:08:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368612554' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:08:20 compute-0 vigilant_newton[97545]: 
Feb 02 15:08:20 compute-0 systemd[1]: libpod-10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2.scope: Deactivated successfully.
Feb 02 15:08:20 compute-0 vigilant_newton[97545]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.bzshzr","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.577486633 +0000 UTC m=+0.628968022 container died 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:20 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 02 15:08:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-950b45e2b10f3ba7080b92747ae21fda8799725921790a95fbb4f796c2a32cb9-merged.mount: Deactivated successfully.
Feb 02 15:08:20 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 02 15:08:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.630360489 +0000 UTC m=+0.061179213 container create 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:08:20 compute-0 podman[97506]: 2026-02-02 15:08:20.647362837 +0000 UTC m=+0.698844266 container remove 10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2 (image=quay.io/ceph/ceph:v20, name=vigilant_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:08:20 compute-0 systemd[1]: libpod-conmon-10898ea654471c173ed052b2f81b770e285f50d351e8e0ccb422a12a6d28ebd2.scope: Deactivated successfully.
Feb 02 15:08:20 compute-0 sudo[97462]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:20 compute-0 systemd[1]: Started libpod-conmon-5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a.scope.
Feb 02 15:08:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.599579009 +0000 UTC m=+0.030397723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.708732542 +0000 UTC m=+0.139551256 container init 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.712473 +0000 UTC m=+0.143291684 container start 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:08:20 compute-0 magical_jemison[97694]: 167 167
Feb 02 15:08:20 compute-0 systemd[1]: libpod-5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a.scope: Deactivated successfully.
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.719032199 +0000 UTC m=+0.149850903 container attach 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.719398467 +0000 UTC m=+0.150217151 container died 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:08:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v80: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb 02 15:08:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8372108927a04659676cbf8349d0fa80701d5815fde85cc731ce2ca35f3b1ef5-merged.mount: Deactivated successfully.
Feb 02 15:08:20 compute-0 podman[97645]: 2026-02-02 15:08:20.758629014 +0000 UTC m=+0.189447698 container remove 5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:20 compute-0 systemd[1]: libpod-conmon-5c76f9ea36a67eb4269d787a17a7aafae74e0c516f993b827c1cc34de51d5a6a.scope: Deactivated successfully.
Feb 02 15:08:20 compute-0 radosgw[94934]: v1 topic migration: starting v1 topic migration..
Feb 02 15:08:20 compute-0 radosgw[94934]: v1 topic migration: finished v1 topic migration
Feb 02 15:08:20 compute-0 ceph-mon[75334]: 2.12 scrub starts
Feb 02 15:08:20 compute-0 ceph-mon[75334]: 2.12 scrub ok
Feb 02 15:08:20 compute-0 ceph-mon[75334]: 2.d scrub starts
Feb 02 15:08:20 compute-0 ceph-mon[75334]: 2.d scrub ok
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3286828479' entity='client.rgw.rgw.compute-0.bzshzr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 02 15:08:20 compute-0 ceph-mon[75334]: osdmap e40: 3 total, 3 up, 3 in
Feb 02 15:08:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2368612554' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 02 15:08:20 compute-0 radosgw[94934]: framework: beast
Feb 02 15:08:20 compute-0 radosgw[94934]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb 02 15:08:20 compute-0 radosgw[94934]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb 02 15:08:20 compute-0 radosgw[94934]: starting handler: beast
Feb 02 15:08:20 compute-0 radosgw[94934]: set uid:gid to 167:167 (ceph:ceph)
Feb 02 15:08:20 compute-0 radosgw[94934]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.bzshzr,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0c17bfce-dd76-4c5d-8f4e-360200b19365,zone_name=default,zonegroup_id=43a24cf8-36b3-4fe9-a0cf-1f9268c2e88d,zonegroup_name=default}
Feb 02 15:08:20 compute-0 podman[97736]: 2026-02-02 15:08:20.888439164 +0000 UTC m=+0.038652277 container create 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:08:20 compute-0 systemd[1]: Started libpod-conmon-0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971.scope.
Feb 02 15:08:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:20 compute-0 podman[97736]: 2026-02-02 15:08:20.872563749 +0000 UTC m=+0.022776892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:20 compute-0 podman[97736]: 2026-02-02 15:08:20.979300231 +0000 UTC m=+0.129513404 container init 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:08:20 compute-0 podman[97736]: 2026-02-02 15:08:20.986166576 +0000 UTC m=+0.136379699 container start 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:21 compute-0 podman[97736]: 2026-02-02 15:08:21.000758013 +0000 UTC m=+0.150971126 container attach 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:21 compute-0 sudo[97793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdzvlstpatwulopybwqrntpevfdlwgpp ; /usr/bin/python3'
Feb 02 15:08:21 compute-0 sudo[97793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:21 compute-0 elegant_vaughan[97753]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:08:21 compute-0 elegant_vaughan[97753]: --> All data devices are unavailable
Feb 02 15:08:21 compute-0 systemd[1]: libpod-0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971.scope: Deactivated successfully.
Feb 02 15:08:21 compute-0 podman[97736]: 2026-02-02 15:08:21.480975145 +0000 UTC m=+0.631188318 container died 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a8ad9c5d6d1d40b17ed4e56fb1810e7f9572a54ce5004efc63108157ccda0fd-merged.mount: Deactivated successfully.
Feb 02 15:08:21 compute-0 podman[97736]: 2026-02-02 15:08:21.544460986 +0000 UTC m=+0.694674109 container remove 0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:08:21 compute-0 systemd[1]: libpod-conmon-0c245c5e560bdd31593da2e98f755367c9d5204fc4efff32f39e7ca17e789971.scope: Deactivated successfully.
Feb 02 15:08:21 compute-0 sudo[97609]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:21 compute-0 python3[97797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:21 compute-0 sudo[97814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:21 compute-0 sudo[97814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:21 compute-0 sudo[97814]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:21 compute-0 podman[97832]: 2026-02-02 15:08:21.647802625 +0000 UTC m=+0.036853308 container create 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:21 compute-0 systemd[1]: Started libpod-conmon-3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1.scope.
Feb 02 15:08:21 compute-0 sudo[97850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:08:21 compute-0 sudo[97850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c12004590354f432f47b53e8446ffb32685a23cdf14ccf84cae5629566b6c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c12004590354f432f47b53e8446ffb32685a23cdf14ccf84cae5629566b6c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:21 compute-0 podman[97832]: 2026-02-02 15:08:21.632278678 +0000 UTC m=+0.021329361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:21 compute-0 podman[97832]: 2026-02-02 15:08:21.728825535 +0000 UTC m=+0.117876238 container init 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:08:21 compute-0 podman[97832]: 2026-02-02 15:08:21.736494617 +0000 UTC m=+0.125545340 container start 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:21 compute-0 podman[97832]: 2026-02-02 15:08:21.740737557 +0000 UTC m=+0.129788250 container attach 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:21 compute-0 ceph-mon[75334]: 2.14 scrub starts
Feb 02 15:08:21 compute-0 ceph-mon[75334]: 2.14 scrub ok
Feb 02 15:08:21 compute-0 ceph-mon[75334]: 2.4 scrub starts
Feb 02 15:08:21 compute-0 ceph-mon[75334]: 2.4 scrub ok
Feb 02 15:08:21 compute-0 ceph-mon[75334]: pgmap v80: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb 02 15:08:21 compute-0 podman[97915]: 2026-02-02 15:08:21.935999576 +0000 UTC m=+0.039917922 container create edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:08:21 compute-0 systemd[1]: Started libpod-conmon-edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a.scope.
Feb 02 15:08:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:22.003834208 +0000 UTC m=+0.107752584 container init edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:22.010084619 +0000 UTC m=+0.114002965 container start edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:21.921663694 +0000 UTC m=+0.025582050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:22 compute-0 fervent_visvesvaraya[97932]: 167 167
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:22.013694246 +0000 UTC m=+0.117612592 container attach edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Feb 02 15:08:22 compute-0 systemd[1]: libpod-edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:22.014643225 +0000 UTC m=+0.118561561 container died edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06f8c0b1299e8025db57dc651039590e55b21e95402a41627f91bbfa798493d-merged.mount: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97915]: 2026-02-02 15:08:22.068763638 +0000 UTC m=+0.172681994 container remove edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:08:22 compute-0 systemd[1]: libpod-conmon-edb451bb6bcfaa2241d2fe50ec5841b5c10a05f0a3737d0bc2efd5c65a9c4b7a.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb 02 15:08:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632469792' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 02 15:08:22 compute-0 affectionate_brown[97878]: mimic
Feb 02 15:08:22 compute-0 systemd[1]: libpod-3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97832]: 2026-02-02 15:08:22.132418441 +0000 UTC m=+0.521469134 container died 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c12004590354f432f47b53e8446ffb32685a23cdf14ccf84cae5629566b6c9-merged.mount: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97832]: 2026-02-02 15:08:22.163214161 +0000 UTC m=+0.552264844 container remove 3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:08:22 compute-0 systemd[1]: libpod-conmon-3570993880d862b995a5208e6aa0b0b5440646dd603829d870663716e732c9b1.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 sudo[97793]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.199308162 +0000 UTC m=+0.045188684 container create 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:22 compute-0 systemd[1]: Started libpod-conmon-1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2.scope.
Feb 02 15:08:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9079486f8b17a9aac5ad33320886ff28607c6f9752f969bccb478cf65dbb01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9079486f8b17a9aac5ad33320886ff28607c6f9752f969bccb478cf65dbb01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9079486f8b17a9aac5ad33320886ff28607c6f9752f969bccb478cf65dbb01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9079486f8b17a9aac5ad33320886ff28607c6f9752f969bccb478cf65dbb01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.26510114 +0000 UTC m=+0.110981702 container init 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.176231135 +0000 UTC m=+0.022111647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.273248612 +0000 UTC m=+0.119129114 container start 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.276215035 +0000 UTC m=+0.122095597 container attach 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:08:22 compute-0 determined_easley[97985]: {
Feb 02 15:08:22 compute-0 determined_easley[97985]:     "0": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:         {
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "devices": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "/dev/loop3"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             ],
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_name": "ceph_lv0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_size": "21470642176",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "name": "ceph_lv0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "tags": {
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.crush_device_class": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.encrypted": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_id": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.vdo": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.with_tpm": "0"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             },
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "vg_name": "ceph_vg0"
Feb 02 15:08:22 compute-0 determined_easley[97985]:         }
Feb 02 15:08:22 compute-0 determined_easley[97985]:     ],
Feb 02 15:08:22 compute-0 determined_easley[97985]:     "1": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:         {
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "devices": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "/dev/loop4"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             ],
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_name": "ceph_lv1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_size": "21470642176",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "name": "ceph_lv1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "tags": {
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.crush_device_class": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.encrypted": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_id": "1",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.vdo": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.with_tpm": "0"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             },
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "vg_name": "ceph_vg1"
Feb 02 15:08:22 compute-0 determined_easley[97985]:         }
Feb 02 15:08:22 compute-0 determined_easley[97985]:     ],
Feb 02 15:08:22 compute-0 determined_easley[97985]:     "2": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:         {
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "devices": [
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "/dev/loop5"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             ],
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_name": "ceph_lv2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_size": "21470642176",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "name": "ceph_lv2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "tags": {
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.cluster_name": "ceph",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.crush_device_class": "",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.encrypted": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.objectstore": "bluestore",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osd_id": "2",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.vdo": "0",
Feb 02 15:08:22 compute-0 determined_easley[97985]:                 "ceph.with_tpm": "0"
Feb 02 15:08:22 compute-0 determined_easley[97985]:             },
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "type": "block",
Feb 02 15:08:22 compute-0 determined_easley[97985]:             "vg_name": "ceph_vg2"
Feb 02 15:08:22 compute-0 determined_easley[97985]:         }
Feb 02 15:08:22 compute-0 determined_easley[97985]:     ]
Feb 02 15:08:22 compute-0 determined_easley[97985]: }
Feb 02 15:08:22 compute-0 systemd[1]: libpod-1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.541047752 +0000 UTC m=+0.386928264 container died 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b9079486f8b17a9aac5ad33320886ff28607c6f9752f969bccb478cf65dbb01-merged.mount: Deactivated successfully.
Feb 02 15:08:22 compute-0 podman[97961]: 2026-02-02 15:08:22.579394322 +0000 UTC m=+0.425274834 container remove 1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:22 compute-0 systemd[1]: libpod-conmon-1b0077cddc5e1d91485df7e8cce9f7ddf6681eaa5b27e058ad8e07f78c384bc2.scope: Deactivated successfully.
Feb 02 15:08:22 compute-0 sudo[97850]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:22 compute-0 sudo[98006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:08:22 compute-0 sudo[98006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:22 compute-0 sudo[98006]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v81: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 195 B/s rd, 391 B/s wr, 1 op/s
Feb 02 15:08:22 compute-0 sudo[98031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:08:22 compute-0 sudo[98031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/632469792' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 02 15:08:22 compute-0 sudo[98091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grninhgkdasxtmkndhgtrjsuqlbtihjq ; /usr/bin/python3'
Feb 02 15:08:22 compute-0 sudo[98091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.012683034 +0000 UTC m=+0.048200358 container create 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:23 compute-0 systemd[1]: Started libpod-conmon-76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e.scope.
Feb 02 15:08:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:22.993400906 +0000 UTC m=+0.028918310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.098968644 +0000 UTC m=+0.134485978 container init 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:23 compute-0 python3[98094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.106798959 +0000 UTC m=+0.142316313 container start 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:08:23 compute-0 zealous_jones[98110]: 167 167
Feb 02 15:08:23 compute-0 systemd[1]: libpod-76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e.scope: Deactivated successfully.
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.111198313 +0000 UTC m=+0.146715627 container attach 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:08:23 compute-0 conmon[98110]: conmon 76f5e514c49406696fb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e.scope/container/memory.events
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.112407417 +0000 UTC m=+0.147924731 container died 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-61f329105d5b8e528bfdcb3cea5f45ab28401496da6a8baac414072d49c2313c-merged.mount: Deactivated successfully.
Feb 02 15:08:23 compute-0 podman[98093]: 2026-02-02 15:08:23.154360833 +0000 UTC m=+0.189878147 container remove 76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.157808216 +0000 UTC m=+0.040137998 container create 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:08:23 compute-0 systemd[1]: libpod-conmon-76f5e514c49406696fb3c91e0188695354a1b3617d3a84e938aa86dc2a3ddf6e.scope: Deactivated successfully.
Feb 02 15:08:23 compute-0 systemd[1]: Started libpod-conmon-02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13.scope.
Feb 02 15:08:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4eb9516c03188a3414a0b792f903b37c22cf87ae546281ccf6726e53bbe4f60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4eb9516c03188a3414a0b792f903b37c22cf87ae546281ccf6726e53bbe4f60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.22388756 +0000 UTC m=+0.106217402 container init 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.228353254 +0000 UTC m=+0.110683036 container start 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.231733516 +0000 UTC m=+0.114063348 container attach 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.141654215 +0000 UTC m=+0.023984017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:08:23 compute-0 podman[98154]: 2026-02-02 15:08:23.27125853 +0000 UTC m=+0.046855071 container create 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:08:23 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:08:23 compute-0 systemd[1]: Started libpod-conmon-04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1.scope.
Feb 02 15:08:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5b7fa2fb82a5a4651c0b5416b68d562be9fd9712da2c23ace2cf84ece410d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5b7fa2fb82a5a4651c0b5416b68d562be9fd9712da2c23ace2cf84ece410d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5b7fa2fb82a5a4651c0b5416b68d562be9fd9712da2c23ace2cf84ece410d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5b7fa2fb82a5a4651c0b5416b68d562be9fd9712da2c23ace2cf84ece410d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:08:23 compute-0 podman[98154]: 2026-02-02 15:08:23.24618423 +0000 UTC m=+0.021780841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:08:23 compute-0 podman[98154]: 2026-02-02 15:08:23.347462867 +0000 UTC m=+0.123059398 container init 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:08:23 compute-0 podman[98154]: 2026-02-02 15:08:23.351909162 +0000 UTC m=+0.127505673 container start 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 02 15:08:23 compute-0 podman[98154]: 2026-02-02 15:08:23.355771502 +0000 UTC m=+0.131368033 container attach 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:08:23 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 02 15:08:23 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 02 15:08:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb 02 15:08:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621722062' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 02 15:08:23 compute-0 elated_dirac[98146]: 
Feb 02 15:08:23 compute-0 systemd[1]: libpod-02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13.scope: Deactivated successfully.
Feb 02 15:08:23 compute-0 elated_dirac[98146]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.766937898 +0000 UTC m=+0.649267680 container died 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4eb9516c03188a3414a0b792f903b37c22cf87ae546281ccf6726e53bbe4f60-merged.mount: Deactivated successfully.
Feb 02 15:08:23 compute-0 ceph-mon[75334]: pgmap v81: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 195 B/s rd, 391 B/s wr, 1 op/s
Feb 02 15:08:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2621722062' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 02 15:08:23 compute-0 podman[98115]: 2026-02-02 15:08:23.802155461 +0000 UTC m=+0.684485243 container remove 02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13 (image=quay.io/ceph/ceph:v20, name=elated_dirac, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:08:23 compute-0 systemd[1]: libpod-conmon-02096f2e17ad880aef098ce7834f96e0a3bae9d7d572837c92c900f42e251d13.scope: Deactivated successfully.
Feb 02 15:08:23 compute-0 sudo[98091]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:23 compute-0 lvm[98282]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:08:23 compute-0 lvm[98282]: VG ceph_vg0 finished
Feb 02 15:08:23 compute-0 lvm[98285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:08:23 compute-0 lvm[98285]: VG ceph_vg1 finished
Feb 02 15:08:24 compute-0 lvm[98287]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:08:24 compute-0 lvm[98287]: VG ceph_vg2 finished
Feb 02 15:08:24 compute-0 boring_thompson[98172]: {}
Feb 02 15:08:24 compute-0 systemd[1]: libpod-04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1.scope: Deactivated successfully.
Feb 02 15:08:24 compute-0 systemd[1]: libpod-04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1.scope: Consumed 1.034s CPU time.
Feb 02 15:08:24 compute-0 podman[98154]: 2026-02-02 15:08:24.149215124 +0000 UTC m=+0.924811715 container died 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 02 15:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5b7fa2fb82a5a4651c0b5416b68d562be9fd9712da2c23ace2cf84ece410d3-merged.mount: Deactivated successfully.
Feb 02 15:08:24 compute-0 podman[98154]: 2026-02-02 15:08:24.193343105 +0000 UTC m=+0.968939616 container remove 04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_thompson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:08:24 compute-0 systemd[1]: libpod-conmon-04e5849a6089f9f136435b9cd38c6eb5cba5f30a1447b1f9c9bd5f42e27f4cf1.scope: Deactivated successfully.
Feb 02 15:08:24 compute-0 sudo[98031]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:08:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:08:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:24 compute-0 sudo[98304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:08:24 compute-0 sudo[98304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:08:24 compute-0 sudo[98304]: pam_unix(sudo:session): session closed for user root
Feb 02 15:08:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v82: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 164 B/s rd, 328 B/s wr, 1 op/s
Feb 02 15:08:25 compute-0 ceph-mon[75334]: 2.3 scrub starts
Feb 02 15:08:25 compute-0 ceph-mon[75334]: 2.3 scrub ok
Feb 02 15:08:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:25 compute-0 ceph-mon[75334]: pgmap v82: 42 pgs: 42 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 164 B/s rd, 328 B/s wr, 1 op/s
Feb 02 15:08:25 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 02 15:08:25 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 02 15:08:25 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb 02 15:08:25 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb 02 15:08:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:26 compute-0 ceph-mon[75334]: 2.1a scrub starts
Feb 02 15:08:26 compute-0 ceph-mon[75334]: 2.1a scrub ok
Feb 02 15:08:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v83: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Feb 02 15:08:27 compute-0 ceph-mon[75334]: 2.15 scrub starts
Feb 02 15:08:27 compute-0 ceph-mon[75334]: 2.15 scrub ok
Feb 02 15:08:27 compute-0 ceph-mon[75334]: pgmap v83: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Feb 02 15:08:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v84: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 7.2 KiB/s wr, 154 op/s
Feb 02 15:08:29 compute-0 ceph-mon[75334]: pgmap v84: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 7.2 KiB/s wr, 154 op/s
Feb 02 15:08:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v85: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 138 op/s
Feb 02 15:08:31 compute-0 ceph-mon[75334]: pgmap v85: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 138 op/s
Feb 02 15:08:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v86: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:33 compute-0 ceph-mon[75334]: pgmap v86: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v87: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:35 compute-0 ceph-mon[75334]: pgmap v87: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v88: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:37 compute-0 ceph-mon[75334]: pgmap v88: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 02 15:08:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v89: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:38 compute-0 ceph-mon[75334]: pgmap v89: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v90: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:41 compute-0 ceph-mon[75334]: pgmap v90: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:08:42
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:08:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v91: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:43 compute-0 ceph-mon[75334]: pgmap v91: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:08:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v92: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.531990209749498e-07 of space, bias 4.0, pg target 0.0009038388251699398 quantized to 16 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb 02 15:08:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb 02 15:08:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb 02 15:08:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb 02 15:08:45 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 40bd03a7-2362-4192-a5de-dd3ed1993ff1 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 02 15:08:45 compute-0 ceph-mon[75334]: pgmap v92: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v94: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb 02 15:08:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:46 compute-0 ceph-mon[75334]: osdmap e41: 3 total, 3 up, 3 in
Feb 02 15:08:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb 02 15:08:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb 02 15:08:46 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 1b5b4cea-ffaa-48f6-94fb-27010363b4fb (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 02 15:08:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.843644142s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 88.888175964s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.843644142s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 88.888175964s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb 02 15:08:47 compute-0 ceph-mon[75334]: pgmap v94: 42 pgs: 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:47 compute-0 ceph-mon[75334]: osdmap e42: 3 total, 3 up, 3 in
Feb 02 15:08:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb 02 15:08:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb 02 15:08:47 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 1e0cff4f-c347-45ca-88f8-905afaf992e5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 02 15:08:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb 02 15:08:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=42/43 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:47 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v97: 73 pgs: 31 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb 02 15:08:48 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=10.498197556s) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active pruub 93.689582825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:48 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev d7ac605c-49e6-4124-9238-3a38e37e5834 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 02 15:08:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:48 compute-0 ceph-mon[75334]: osdmap e43: 3 total, 3 up, 3 in
Feb 02 15:08:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 02 15:08:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:48 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=10.498197556s) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown pruub 93.689582825s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb 02 15:08:49 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb 02 15:08:49 compute-0 ceph-mgr[75628]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Feb 02 15:08:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb 02 15:08:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb 02 15:08:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=10.469370842s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active pruub 87.296669006s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=10.469370842s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown pruub 87.296669006s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev ac2f4931-9dba-49dd-96ea-8dda5dec7537 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 02 15:08:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:49 compute-0 ceph-mon[75334]: pgmap v97: 73 pgs: 31 unknown, 42 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-mon[75334]: osdmap e44: 3 total, 3 up, 3 in
Feb 02 15:08:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=44/45 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:49 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v100: 135 pgs: 31 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 02 15:08:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 02 15:08:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb 02 15:08:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb 02 15:08:50 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.408913612s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active pruub 92.933502197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:50 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.408913612s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown pruub 92.933502197s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 960a0a80-fd54-4777-ae5b-16b3d8f68d12 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 02 15:08:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-mon[75334]: 3.1b scrub starts
Feb 02 15:08:51 compute-0 ceph-mon[75334]: 3.1b scrub ok
Feb 02 15:08:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:51 compute-0 ceph-mon[75334]: osdmap e45: 3 total, 3 up, 3 in
Feb 02 15:08:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:51 compute-0 ceph-mon[75334]: pgmap v100: 135 pgs: 31 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=44/46 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 02 15:08:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 46 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=9.785219193s) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 33'38 mlcod 33'38 active pruub 95.711914062s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 46 pg[6.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=9.785219193s) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 33'38 mlcod 0'0 unknown pruub 95.711914062s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb 02 15:08:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=46/47 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 10b04432-fbfd-4ea3-9fec-ec7e56b3b893 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 47 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: osdmap e46: 3 total, 3 up, 3 in
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: osdmap e47: 3 total, 3 up, 3 in
Feb 02 15:08:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v103: 181 pgs: 77 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb 02 15:08:52 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev 743d79bf-8d9a-41e1-b8ab-b48642ce12ba (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 48 pg[8.0( v 33'6 (0'0,33'6] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=8.522173882s) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 33'5 mlcod 33'5 active pruub 92.002777100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:52 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 48 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=34/35 n=210 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.541043282s) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 94.021972656s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb 02 15:08:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:52 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 48 pg[8.0( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=8.522173882s) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 33'5 mlcod 0'0 unknown pruub 92.002777100s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:52 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 48 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.541043282s) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 94.021972656s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9dd00 space 0x55d61787c540 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99d80 space 0x55d6179ce240 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99280 space 0x55d618109440 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9db00 space 0x55d6184de240 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618028500 space 0x55d617337a40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9c280 space 0x55d6184dee40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f64480 space 0x55d617336240 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617909d80 space 0x55d61787c840 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f84200 space 0x55d61733ba40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618007000 space 0x55d6177bfa40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61806db00 space 0x55d61737d140 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618029800 space 0x55d617351740 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f8a400 space 0x55d61787ce40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990980 space 0x55d617868240 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d6180a7d00 space 0x55d617351d40 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990400 space 0x55d61787d140 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617fcf700 space 0x55d617339a40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99000 space 0x55d617869d40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990200 space 0x55d61787da40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f64280 space 0x55d617336b40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990000 space 0x55d617338240 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61805fd00 space 0x55d618023440 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61806d680 space 0x55d6177bf140 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d6180a7f00 space 0x55d617869440 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61806c300 space 0x55d6178f8240 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d6180a7580 space 0x55d617337440 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9d780 space 0x55d6183c3740 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9c080 space 0x55d61781cb40 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9c580 space 0x55d61737ce40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f98a00 space 0x55d617816840 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f8a900 space 0x55d6177bf740 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f98100 space 0x55d61843d440 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9d600 space 0x55d617816e40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f8a880 space 0x55d6179cf140 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f7c480 space 0x55d61733a840 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f91100 space 0x55d6180eda40 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990f80 space 0x55d618108540 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618028980 space 0x55d6180ed140 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618046c00 space 0x55d6180fa540 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f8a280 space 0x55d6184de840 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990d80 space 0x55d618108e40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d6180a7f80 space 0x55d617868b40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99e80 space 0x55d6180f0e40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f7c100 space 0x55d617350e40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9d500 space 0x55d61781dd40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f91300 space 0x55d61737a240 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9d300 space 0x55d6180f1a40 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d618029e80 space 0x55d617350540 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9d800 space 0x55d617339140 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99d00 space 0x55d6178cab40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617990280 space 0x55d6180fbd40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99f00 space 0x55d6178ca240 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f9c500 space 0x55d6183c2540 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f84400 space 0x55d61733b140 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61806c980 space 0x55d6179b1d40 0x0~9a clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99780 space 0x55d617339440 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d6180a7c00 space 0x55d617337d40 0x0~6e clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d61806d580 space 0x55d6179ce540 0x0~98 clean)
Feb 02 15:08:52 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d618410d80) split_cache   moving buffer(0x55d617f99980 space 0x55d617338b40 0x0~6e clean)
Feb 02 15:08:53 compute-0 ceph-mon[75334]: 5.1f scrub starts
Feb 02 15:08:53 compute-0 ceph-mon[75334]: 5.1f scrub ok
Feb 02 15:08:53 compute-0 ceph-mon[75334]: pgmap v103: 181 pgs: 77 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:53 compute-0 ceph-mon[75334]: osdmap e48: 3 total, 3 up, 3 in
Feb 02 15:08:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 02 15:08:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb 02 15:08:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb 02 15:08:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] update: starting ev cb143b11-9992-48d9-90f2-7d2a7dd1eb30 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.15( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.16( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.14( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.17( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.10( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 40bd03a7-2362-4192-a5de-dd3ed1993ff1 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 40bd03a7-2362-4192-a5de-dd3ed1993ff1 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 1b5b4cea-ffaa-48f6-94fb-27010363b4fb (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 1b5b4cea-ffaa-48f6-94fb-27010363b4fb (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 1e0cff4f-c347-45ca-88f8-905afaf992e5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 1e0cff4f-c347-45ca-88f8-905afaf992e5 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev d7ac605c-49e6-4124-9238-3a38e37e5834 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.12( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event d7ac605c-49e6-4124-9238-3a38e37e5834 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev ac2f4931-9dba-49dd-96ea-8dda5dec7537 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event ac2f4931-9dba-49dd-96ea-8dda5dec7537 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 960a0a80-fd54-4777-ae5b-16b3d8f68d12 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 960a0a80-fd54-4777-ae5b-16b3d8f68d12 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.13( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 10b04432-fbfd-4ea3-9fec-ec7e56b3b893 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 10b04432-fbfd-4ea3-9fec-ec7e56b3b893 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev 743d79bf-8d9a-41e1-b8ab-b48642ce12ba (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 743d79bf-8d9a-41e1-b8ab-b48642ce12ba (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1c( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] complete: finished ev cb143b11-9992-48d9-90f2-7d2a7dd1eb30 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 02 15:08:53 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event cb143b11-9992-48d9-90f2-7d2a7dd1eb30 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1e( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.19( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.18( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1a( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1b( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.4( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.6( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.5( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.7( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.e( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.3( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.c( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.b( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.9( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.11( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.a( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1( v 33'6 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.8( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.2( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=34/35 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.16( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.17( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.13( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1e( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.19( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.5( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.7( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.3( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.0( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 33'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.a( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.1( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.8( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 49 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:54 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 02 15:08:54 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 02 15:08:54 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 15 completed events
Feb 02 15:08:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:08:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v106: 243 pgs: 108 unknown, 135 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 02 15:08:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 02 15:08:54 compute-0 ceph-mon[75334]: osdmap e49: 3 total, 3 up, 3 in
Feb 02 15:08:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:08:54 compute-0 ceph-mon[75334]: pgmap v106: 243 pgs: 108 unknown, 135 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 02 15:08:55 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 02 15:08:55 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 02 15:08:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:08:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb 02 15:08:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb 02 15:08:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb 02 15:08:55 compute-0 ceph-mon[75334]: 3.1c scrub starts
Feb 02 15:08:55 compute-0 ceph-mon[75334]: 3.1c scrub ok
Feb 02 15:08:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 02 15:08:55 compute-0 ceph-mon[75334]: osdmap e50: 3 total, 3 up, 3 in
Feb 02 15:08:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb 02 15:08:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb 02 15:08:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb 02 15:08:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb 02 15:08:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v108: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:56 compute-0 ceph-mon[75334]: 5.16 scrub starts
Feb 02 15:08:56 compute-0 ceph-mon[75334]: 5.16 scrub ok
Feb 02 15:08:56 compute-0 ceph-mon[75334]: 4.16 scrub starts
Feb 02 15:08:56 compute-0 ceph-mon[75334]: 4.16 scrub ok
Feb 02 15:08:56 compute-0 ceph-mon[75334]: pgmap v108: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50 pruub=10.331398964s) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active pruub 98.057472229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50 pruub=10.331398964s) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown pruub 98.057472229s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb 02 15:08:57 compute-0 ceph-mon[75334]: 5.1e scrub starts
Feb 02 15:08:57 compute-0 ceph-mon[75334]: 5.1e scrub ok
Feb 02 15:08:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb 02 15:08:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=50/51 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:57 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[38,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 50 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=36/37 n=9 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=15.402663231s) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 100.423538208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=15.402663231s) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 100.423538208s@ mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 51 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:08:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb 02 15:08:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb 02 15:08:58 compute-0 ceph-mon[75334]: osdmap e51: 3 total, 3 up, 3 in
Feb 02 15:08:58 compute-0 ceph-mon[75334]: pgmap v110: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 52 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:08:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb 02 15:08:59 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 02 15:08:59 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 02 15:08:59 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 02 15:08:59 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 02 15:08:59 compute-0 ceph-mon[75334]: osdmap e52: 3 total, 3 up, 3 in
Feb 02 15:08:59 compute-0 ceph-mon[75334]: 4.18 scrub starts
Feb 02 15:08:59 compute-0 ceph-mon[75334]: 4.18 scrub ok
Feb 02 15:09:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:01 compute-0 ceph-mon[75334]: 5.1a scrub starts
Feb 02 15:09:01 compute-0 ceph-mon[75334]: 5.1a scrub ok
Feb 02 15:09:01 compute-0 ceph-mon[75334]: pgmap v112: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 02 15:09:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 02 15:09:02 compute-0 ceph-mon[75334]: 5.1b scrub starts
Feb 02 15:09:02 compute-0 ceph-mon[75334]: 5.1b scrub ok
Feb 02 15:09:02 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb 02 15:09:02 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb 02 15:09:02 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb 02 15:09:02 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb 02 15:09:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:03 compute-0 ceph-mon[75334]: 4.14 scrub starts
Feb 02 15:09:03 compute-0 ceph-mon[75334]: 4.14 scrub ok
Feb 02 15:09:03 compute-0 ceph-mon[75334]: 5.12 scrub starts
Feb 02 15:09:03 compute-0 ceph-mon[75334]: 5.12 scrub ok
Feb 02 15:09:03 compute-0 ceph-mon[75334]: pgmap v113: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:03 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb 02 15:09:03 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb 02 15:09:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb 02 15:09:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb 02 15:09:04 compute-0 ceph-mon[75334]: 4.1c scrub starts
Feb 02 15:09:04 compute-0 ceph-mon[75334]: 4.1c scrub ok
Feb 02 15:09:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:09:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839615822s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952651978s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839588165s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952651978s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839618683s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952743530s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839564323s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952743530s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839121819s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952590942s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.839082718s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952590942s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809926987s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923446655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809905052s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923507690s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809827805s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923446655s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809864998s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923507690s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809754372s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923461914s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809740067s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923461914s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809581757s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923530579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809814453s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923797607s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809531212s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923530579s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838492393s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952529907s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809777260s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923797607s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838459015s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952529907s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809305191s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.923530579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809283257s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.923530579s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809279442s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923553467s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838171005s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952499390s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.809242249s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923553467s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838019371s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952400208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838144302s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952499390s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838006973s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952400208s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807994843s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.921455383s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837882042s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952415466s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837845802s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952377319s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837861061s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952415466s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837823868s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952377319s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808972359s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.923576355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808919907s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.923576355s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.838582039s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952690125s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837576866s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952369690s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837414742s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952369690s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806698799s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.921455383s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837743759s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952690125s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837062836s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952201843s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837008476s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952201843s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808224678s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923645020s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808175087s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923645020s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837241173s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952705383s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.837199211s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952705383s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808218956s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923873901s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807927132s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.923629761s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.808205605s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923873901s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807852745s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.923629761s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.836227417s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952117920s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807898521s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923896790s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.836130142s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952117920s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807824135s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923896790s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807824135s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923919678s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835367203s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951622009s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/52 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807774544s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923919678s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835351944s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951622009s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807621956s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923919678s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807589531s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923919678s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835258484s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951744080s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807553291s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.924079895s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835210800s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951728821s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807528496s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.924079895s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835213661s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951744080s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835144997s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951728821s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807263374s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.923988342s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835242271s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.952003479s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807226181s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.923988342s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.835225105s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.952003479s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807118416s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.924018860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.834596634s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951515198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807050705s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 101.924018860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.834544182s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951515198s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807026863s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.924018860s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807025909s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 101.924018860s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.807026863s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.924049377s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806997299s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.924049377s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806797981s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.924072266s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.834218025s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951484680s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806777954s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 101.924064636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806766510s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.924072266s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/52 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=9.806727409s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 101.924064636s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.834176064s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951484680s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.833946228s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 101.951469421s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=44/46 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=9.833865166s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 101.951469421s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-mon[75334]: 3.19 scrub starts
Feb 02 15:09:05 compute-0 ceph-mon[75334]: 3.19 scrub ok
Feb 02 15:09:05 compute-0 ceph-mon[75334]: pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.713004112s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244239807s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712966919s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244239807s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737770081s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269111633s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737731934s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269111633s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.733657837s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.265068054s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712707520s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244125366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712677002s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244125366s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.733624458s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.265068054s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712629318s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244163513s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712603569s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244163513s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712559700s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244171143s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737736702s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269386292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712541580s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244171143s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737844467s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269546509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.776263237s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.528312683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.742777824s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.494865417s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.742755890s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.494865417s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.654177666s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406448364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.654160500s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406448364s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.776241302s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.528312683s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.727874756s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480346680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.727860451s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480346680s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.740288734s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.492897034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.740274429s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.492897034s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.654130936s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406837463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.727471352s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480216980s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.654109955s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406837463s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.727458000s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480216980s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.740012169s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.492874146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.739970207s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.492874146s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.777526855s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530509949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.777495384s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530509949s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.738759995s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.492843628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.738738060s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.492843628s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737713814s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269386292s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737820625s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269546509s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712282181s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244110107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712259293s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244110107s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737691879s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269569397s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737666130s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269569397s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712128639s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244132996s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737476349s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269515991s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711922646s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244003296s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.712078094s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244132996s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737452507s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269515991s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711897850s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244003296s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711735725s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243972778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711711884s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243972778s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711850166s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.244232178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711467743s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243888855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711444855s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243888855s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711823463s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.244232178s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711201668s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243728638s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711181641s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243728638s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711279869s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243881226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.711239815s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243881226s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.737022400s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269683838s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.736997604s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269683838s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.736959457s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 active pruub 110.269691467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.736913681s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 110.269691467s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710989952s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243804932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710939407s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243804932s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710781097s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243690491s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710758209s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243690491s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710573196s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243598938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710550308s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243598938s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710492134s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243598938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710472107s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243598938s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.704239845s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.237480164s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710617065s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.243850708s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.704215050s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.237480164s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.710576057s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.243850708s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.704116821s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.237464905s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.704087257s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.237464905s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.704006195s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 108.237480164s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=8.703979492s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 108.237480164s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.646811485s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407165527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.646777153s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407165527s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.769742012s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530433655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.769708633s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530433655s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.719411850s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480361938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.719385147s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480361938s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.641541481s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.402725220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.641510010s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.402725220s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.718800545s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480171204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.718772888s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480171204s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.732460022s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.493995667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.732403755s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.493995667s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731163025s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.492950439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731098175s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.492950439s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.768370628s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530426025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.768337250s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530426025s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731656075s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.493957520s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731626511s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.493957520s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767856598s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530525208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767836571s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530525208s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731190681s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.493949890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731157303s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.493949890s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643619537s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406524658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643577576s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406524658s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.717182159s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480155945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.717148781s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480155945s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767484665s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530601501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730819702s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.493972778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643436432s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406585693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730797768s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.493972778s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.716915131s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480125427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767397881s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530601501s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.716893196s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480125427s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643410683s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406585693s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731324196s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.494659424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731307983s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.494659424s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643292427s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406669617s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.643223763s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406669617s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.716658592s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.480148315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767116547s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530609131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731176376s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.494689941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.716635704s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.480148315s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.767088890s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530609131s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731444359s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.495140076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766874313s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530609131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731423378s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.495140076s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766850471s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530609131s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730725288s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.494636536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730709076s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.494636536s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.642760277s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406700134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.642731667s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406700134s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766552925s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530632019s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731159210s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.494689941s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766524315s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530632019s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.642507553s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406700134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731233597s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.495452881s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.642493248s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406700134s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.731197357s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.495452881s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.715332031s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.479660034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766294479s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530639648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.715299606s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.479660034s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.766279221s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530639648s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730861664s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.495246887s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730820656s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.495246887s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730639458s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.495300293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730626106s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.495300293s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.641938210s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406791687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730749130s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.495643616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.641896248s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406791687s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730690956s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.495643616s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730631828s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 39'483 active pruub 108.495605469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730409622s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 108.495605469s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764855385s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530647278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764840126s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530647278s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640908241s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.406814575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640872002s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.406814575s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.711699486s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477722168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.711685181s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477722168s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729836464s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.496009827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.6( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729795456s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.496009827s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764618874s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530914307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729617119s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.495918274s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764593124s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530914307s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729579926s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.495918274s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.713048935s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.479675293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.713035583s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.479675293s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729277611s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.495918274s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764159203s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530822754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729238510s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.495918274s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764122009s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530822754s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640259743s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407051086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640246391s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407051086s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.763852119s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530914307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.763826370s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530914307s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.710614204s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477775574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.710584641s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477775574s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640947342s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.408203125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.640918732s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.408203125s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729185104s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.496536255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.712285995s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.479652405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729165077s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.496536255s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.712196350s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.479652405s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.763346672s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530883789s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709779739s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477363586s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.763278008s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530883789s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709763527s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477363586s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639423370s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407089233s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639391899s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407089233s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728650093s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.496421814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728630066s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.496421814s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639087677s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407104492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639068604s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407104492s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709253311s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477340698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762832642s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530937195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762804985s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530937195s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709214211s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477340698s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728483200s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.496528625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709158897s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477348328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.709093094s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477348328s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.638850212s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407173157s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.638826370s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407173157s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728212357s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.496589661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728157997s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.496574402s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728101730s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.496574402s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728177071s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.496589661s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639411926s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.407997131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.639382362s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.407997131s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762290001s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530960083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762254715s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.530998230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762254715s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530960083s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.762237549s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.530998230s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730381012s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.499168396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.708637238s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477493286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.730344772s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.499168396s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729627609s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.498519897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.708617210s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477493286s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729614258s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.498519897s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729435921s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.498466492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.708288193s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.477340698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.761941910s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.531005859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.f( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729411125s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.498466492s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.708267212s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.477340698s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.761920929s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.531005859s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729285240s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.498481750s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.707745552s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.476997375s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729249001s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.498481750s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.761644363s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.531036377s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.761630058s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.531036377s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729077339s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.498519897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729011536s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.498527527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.729038239s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.498519897s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728999138s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.498527527s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.707484245s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.476997375s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.765010834s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.534759521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764975548s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.534759521s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.707002640s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.476898193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.638279915s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.408180237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.706977844s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.476898193s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764751434s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.534805298s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764735222s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.534805298s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.638242722s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.408180237s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.704257011s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 106.474365234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=10.704238892s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 106.474365234s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.637981415s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.408187866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.637941360s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.408187866s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728670120s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.498992920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728649139s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.498992920s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764397621s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.534767151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.637788773s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.408195496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764354706s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.534767151s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.637741089s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.408195496s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728631020s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.499176025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728579521s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.499176025s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.725904465s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.496528625s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764077187s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 104.534774780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=8.764036179s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 104.534774780s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728632927s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 active pruub 108.499450684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728618622s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 unknown NOTIFY pruub 108.499450684s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728490829s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 108.499443054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=12.728463173s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.499443054s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.636427879s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 110.408203125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=14.636389732s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 110.408203125s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb 02 15:09:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb 02 15:09:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb 02 15:09:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb 02 15:09:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.14( v 52'19 lc 37'7 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.12( v 52'19 lc 39'17 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.15( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.4( v 33'6 (0'0,33'6] local-lis/les=53/54 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.d( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.2( v 33'6 (0'0,33'6] local-lis/les=53/54 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.1c( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.10( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.14( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.18( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.1f( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.1a( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.1d( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.11( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.12( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[8.1b( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.15( v 52'19 lc 37'3 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.d( v 35'39 lc 33'13 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.7( v 35'39 lc 33'20 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.5( v 35'39 lc 33'9 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[6.f( v 35'39 lc 33'1 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.e( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.9( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.c( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.d( v 52'19 lc 37'5 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.f( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.e( v 52'19 lc 37'4 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.b( v 33'6 (0'0,33'6] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.9( v 52'19 lc 37'8 (0'0,52'19] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[8.6( v 33'6 lc 0'0 (0'0,33'6] local-lis/les=53/54 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:09:06 compute-0 ceph-mon[75334]: osdmap e53: 3 total, 3 up, 3 in
Feb 02 15:09:06 compute-0 ceph-mon[75334]: 10.1c scrub starts
Feb 02 15:09:06 compute-0 ceph-mon[75334]: 10.1c scrub ok
Feb 02 15:09:06 compute-0 ceph-mon[75334]: osdmap e54: 3 total, 3 up, 3 in
Feb 02 15:09:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb 02 15:09:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb 02 15:09:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb 02 15:09:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 16 unknown, 61 peering, 228 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:06 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 55 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=54/55 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=49'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:07 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb 02 15:09:07 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb 02 15:09:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb 02 15:09:07 compute-0 ceph-mon[75334]: osdmap e55: 3 total, 3 up, 3 in
Feb 02 15:09:07 compute-0 ceph-mon[75334]: pgmap v118: 305 pgs: 16 unknown, 61 peering, 228 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb 02 15:09:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.320693970s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.532859802s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.320841789s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.533035278s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.320619583s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.532859802s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.320780754s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533035278s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.314605713s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.527008057s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.314564705s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.527008057s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.319987297s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.532936096s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.319748878s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.532936096s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.318852425s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.532844543s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.318764687s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.532966614s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.313600540s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 113.527046204s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.318670273s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.532966614s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.318822861s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.532844543s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.312728882s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.527046204s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 sudo[98352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkmwstzjqumnutnvlkatxhciarhvuksg ; /usr/bin/python3'
Feb 02 15:09:08 compute-0 sudo[98352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:08 compute-0 python3[98354]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:09:08 compute-0 podman[98355]: 2026-02-02 15:09:08.571030811 +0000 UTC m=+0.064478174 container create 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:09:08 compute-0 systemd[76706]: Starting Mark boot as successful...
Feb 02 15:09:08 compute-0 systemd[76706]: Finished Mark boot as successful.
Feb 02 15:09:08 compute-0 systemd[1]: Started libpod-conmon-4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489.scope.
Feb 02 15:09:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7696213e0564b58ece3fdd6f6d6bfa64affcd2fda376c5518ae2d03e97ab3eaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7696213e0564b58ece3fdd6f6d6bfa64affcd2fda376c5518ae2d03e97ab3eaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:08 compute-0 podman[98355]: 2026-02-02 15:09:08.543109191 +0000 UTC m=+0.036556614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:09:08 compute-0 podman[98355]: 2026-02-02 15:09:08.643197715 +0000 UTC m=+0.136645138 container init 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:09:08 compute-0 podman[98355]: 2026-02-02 15:09:08.650862336 +0000 UTC m=+0.144309699 container start 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:09:08 compute-0 podman[98355]: 2026-02-02 15:09:08.655509239 +0000 UTC m=+0.148956612 container attach 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb 02 15:09:08 compute-0 ceph-mon[75334]: 10.18 scrub starts
Feb 02 15:09:08 compute-0 ceph-mon[75334]: 10.18 scrub ok
Feb 02 15:09:08 compute-0 ceph-mon[75334]: osdmap e56: 3 total, 3 up, 3 in
Feb 02 15:09:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb 02 15:09:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.306180954s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533485413s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.306118011s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533485413s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.305862427s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533477783s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.305819511s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533477783s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=49'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=49'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.301617622s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=49'484 lcod 49'484 active pruub 113.533660889s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.301453590s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=49'484 lcod 49'484 unknown NOTIFY pruub 113.533660889s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.300771713s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533493042s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.300663948s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533493042s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.300011635s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533248901s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299989700s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533264160s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299912453s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533264160s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299580574s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533050537s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299512863s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533050537s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299751282s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533248901s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299361229s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533096313s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299294472s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533096313s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299164772s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 113.533630371s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:08 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.299036980s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 113.533630371s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 16 unknown, 61 peering, 228 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 122 B/s, 0 objects/s recovering
Feb 02 15:09:09 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb 02 15:09:09 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb 02 15:09:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb 02 15:09:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb 02 15:09:09 compute-0 ceph-mon[75334]: osdmap e57: 3 total, 3 up, 3 in
Feb 02 15:09:09 compute-0 ceph-mon[75334]: pgmap v121: 305 pgs: 16 unknown, 61 peering, 228 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 122 B/s, 0 objects/s recovering
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.5( v 55'485 (0'0,55'485] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=55'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb 02 15:09:09 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:09 compute-0 priceless_curran[98371]: could not fetch user info: no user info saved
Feb 02 15:09:09 compute-0 systemd[1]: libpod-4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489.scope: Deactivated successfully.
Feb 02 15:09:09 compute-0 podman[98355]: 2026-02-02 15:09:09.822333587 +0000 UTC m=+1.315780930 container died 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7696213e0564b58ece3fdd6f6d6bfa64affcd2fda376c5518ae2d03e97ab3eaa-merged.mount: Deactivated successfully.
Feb 02 15:09:09 compute-0 podman[98355]: 2026-02-02 15:09:09.865107088 +0000 UTC m=+1.358554451 container remove 4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489 (image=quay.io/ceph/ceph:v20, name=priceless_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:09:09 compute-0 systemd[1]: libpod-conmon-4b97809b4145b5c3905f091936d613add0879e59a1bdf143331d4c5f79237489.scope: Deactivated successfully.
Feb 02 15:09:09 compute-0 sudo[98352]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:10 compute-0 sudo[98493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjpjjezkisynuqzmexckeqvefiqhytr ; /usr/bin/python3'
Feb 02 15:09:10 compute-0 sudo[98493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:10 compute-0 python3[98495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.255860505 +0000 UTC m=+0.059691049 container create ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:09:10 compute-0 systemd[1]: Started libpod-conmon-ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2.scope.
Feb 02 15:09:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c59f56230546f78c73d2c7c87d40c930c6550e08ac370af6a0ef950f1851d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c59f56230546f78c73d2c7c87d40c930c6550e08ac370af6a0ef950f1851d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.230826618 +0000 UTC m=+0.034657202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.340606049 +0000 UTC m=+0.144436593 container init ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.347647366 +0000 UTC m=+0.151477900 container start ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.351732286 +0000 UTC m=+0.155562810 container attach ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]: {
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "user_id": "openstack",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "display_name": "openstack",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "email": "",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "suspended": 0,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "max_buckets": 1000,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "subusers": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "keys": [
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         {
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:             "user": "openstack",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:             "access_key": "2C0JW48Q238GG111VGSG",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:             "secret_key": "NiO61X9HYxU0sGEDBoG3Bn3Pjou7ibs5toagGKqZ",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:             "active": true,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:             "create_date": "2026-02-02T15:09:10.599861Z"
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         }
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     ],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "swift_keys": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "caps": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "op_mask": "read, write, delete",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "default_placement": "",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "default_storage_class": "",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "placement_tags": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "bucket_quota": {
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "enabled": false,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "check_on_raw": false,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_size": -1,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_size_kb": 0,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_objects": -1
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     },
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "user_quota": {
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "enabled": false,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "check_on_raw": false,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_size": -1,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_size_kb": 0,
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:         "max_objects": -1
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     },
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "temp_url_keys": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "type": "rgw",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "mfa_ids": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "account_id": "",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "path": "/",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "create_date": "2026-02-02T15:09:10.599233Z",
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "tags": [],
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]:     "group_ids": []
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]: }
Feb 02 15:09:10 compute-0 infallible_ganguly[98512]: 
Feb 02 15:09:10 compute-0 systemd[1]: libpod-ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2.scope: Deactivated successfully.
Feb 02 15:09:10 compute-0 conmon[98512]: conmon ee601157b77867437533 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2.scope/container/memory.events
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.630963233 +0000 UTC m=+0.434793747 container died ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:09:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c59f56230546f78c73d2c7c87d40c930c6550e08ac370af6a0ef950f1851d0-merged.mount: Deactivated successfully.
Feb 02 15:09:10 compute-0 podman[98496]: 2026-02-02 15:09:10.669118691 +0000 UTC m=+0.472949205 container remove ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2 (image=quay.io/ceph/ceph:v20, name=infallible_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:10 compute-0 systemd[1]: libpod-conmon-ee601157b778674375332447768663e53004fb658fca86a6047a413ebd366cf2.scope: Deactivated successfully.
Feb 02 15:09:10 compute-0 sudo[98493]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:10 compute-0 ceph-mon[75334]: 5.17 scrub starts
Feb 02 15:09:10 compute-0 ceph-mon[75334]: 5.17 scrub ok
Feb 02 15:09:10 compute-0 ceph-mon[75334]: osdmap e58: 3 total, 3 up, 3 in
Feb 02 15:09:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 8.1 KiB/s wr, 208 op/s; 1.6 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 02 15:09:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 02 15:09:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 02 15:09:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 02 15:09:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 02 15:09:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 02 15:09:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb 02 15:09:11 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb 02 15:09:11 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb 02 15:09:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb 02 15:09:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb 02 15:09:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb 02 15:09:11 compute-0 ceph-mon[75334]: pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 8.1 KiB/s wr, 208 op/s; 1.6 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 02 15:09:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 02 15:09:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 02 15:09:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 15:09:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 15:09:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb 02 15:09:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb 02 15:09:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb 02 15:09:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 4.c scrub starts
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 4.c scrub ok
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 5.b scrub starts
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 5.b scrub ok
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 9.14 scrub starts
Feb 02 15:09:12 compute-0 ceph-mon[75334]: 9.14 scrub ok
Feb 02 15:09:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 15:09:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 02 15:09:12 compute-0 ceph-mon[75334]: osdmap e59: 3 total, 3 up, 3 in
Feb 02 15:09:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 6.5 KiB/s wr, 167 op/s; 1.3 KiB/s, 1 keys/s, 24 objects/s recovering
Feb 02 15:09:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 02 15:09:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 02 15:09:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 02 15:09:12 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 02 15:09:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb 02 15:09:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb 02 15:09:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb 02 15:09:13 compute-0 ceph-mon[75334]: 11.16 scrub starts
Feb 02 15:09:13 compute-0 ceph-mon[75334]: 11.16 scrub ok
Feb 02 15:09:13 compute-0 ceph-mon[75334]: pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 6.5 KiB/s wr, 167 op/s; 1.3 KiB/s, 1 keys/s, 24 objects/s recovering
Feb 02 15:09:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 02 15:09:13 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 02 15:09:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 15:09:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 15:09:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb 02 15:09:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940719604s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 active pruub 120.213035583s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940765381s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 active pruub 120.213363647s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940719604s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 unknown NOTIFY pruub 120.213363647s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940549850s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 unknown NOTIFY pruub 120.213035583s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940036774s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 active pruub 120.213546753s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.939943314s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 unknown NOTIFY pruub 120.213546753s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940068245s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 active pruub 120.213775635s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=15.940044403s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=35'39 unknown NOTIFY pruub 120.213775635s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 59 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.205365181s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 active pruub 118.269851685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.205301285s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 118.269851685s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 59 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.204319000s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 active pruub 118.269531250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 59 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.203925133s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 active pruub 118.269371033s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.203884125s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 118.269371033s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.204242706s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 118.269531250s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 59 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.203611374s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 active pruub 118.269340515s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:13 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 60 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=10.203289986s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 118.269340515s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:13 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 60 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb 02 15:09:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb 02 15:09:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb 02 15:09:14 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 61 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: [progress INFO root] Completed event 8ebb8663-0960-416a-8a66-5add6ff2f71a (Global Recovery Event) in 25 seconds
Feb 02 15:09:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 6.9 KiB/s wr, 170 op/s; 1.2 KiB/s, 1 keys/s, 24 objects/s recovering
Feb 02 15:09:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 02 15:09:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 02 15:09:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 02 15:09:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 02 15:09:14 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 61 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=59/61 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 61 pg[6.e( v 35'39 lc 33'19 (0'0,35'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 61 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=59/61 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-mon[75334]: 7.19 scrub starts
Feb 02 15:09:14 compute-0 ceph-mon[75334]: 7.19 scrub ok
Feb 02 15:09:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 15:09:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 02 15:09:14 compute-0 ceph-mon[75334]: osdmap e60: 3 total, 3 up, 3 in
Feb 02 15:09:14 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 61 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=60/61 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 61 pg[6.7( v 35'39 lc 33'20 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 61 pg[6.f( v 35'39 lc 33'1 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:14 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 61 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:15 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb 02 15:09:15 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb 02 15:09:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb 02 15:09:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb 02 15:09:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb 02 15:09:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 15:09:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 15:09:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb 02 15:09:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb 02 15:09:15 compute-0 ceph-mon[75334]: osdmap e61: 3 total, 3 up, 3 in
Feb 02 15:09:15 compute-0 ceph-mon[75334]: pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 6.9 KiB/s wr, 170 op/s; 1.2 KiB/s, 1 keys/s, 24 objects/s recovering
Feb 02 15:09:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 02 15:09:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 02 15:09:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 15:09:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 02 15:09:15 compute-0 ceph-mon[75334]: osdmap e62: 3 total, 3 up, 3 in
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 62 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.839751244s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 active pruub 126.269889832s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 62 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.839698792s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 126.269889832s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 62 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.839563370s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 active pruub 126.270004272s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 62 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.839500427s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 126.270004272s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 62 pg[6.c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62) [1] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 62 pg[6.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62) [1] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:16 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb 02 15:09:16 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb 02 15:09:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb 02 15:09:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb 02 15:09:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 18 op/s; 128 B/s, 1 keys/s, 1 objects/s recovering
Feb 02 15:09:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 02 15:09:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 02 15:09:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 02 15:09:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 02 15:09:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb 02 15:09:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 15:09:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 15:09:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb 02 15:09:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 5.0 scrub starts
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 5.0 scrub ok
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 8.16 scrub starts
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 8.16 scrub ok
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 4.b scrub starts
Feb 02 15:09:16 compute-0 ceph-mon[75334]: 4.b scrub ok
Feb 02 15:09:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 02 15:09:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.896493912s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=35'39 active pruub 120.213211060s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.896669388s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=35'39 active pruub 120.213546753s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.896628380s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=35'39 unknown NOTIFY pruub 120.213546753s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.896400452s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=35'39 unknown NOTIFY pruub 120.213211060s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.c( v 35'39 lc 33'17 (0'0,35'39] local-lis/les=62/63 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62) [1] r=0 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:16 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 63 pg[6.4( v 35'39 lc 33'15 (0'0,35'39] local-lis/les=62/63 n=2 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=62) [1] r=0 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 63 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:16 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 63 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:17 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb 02 15:09:17 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb 02 15:09:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb 02 15:09:17 compute-0 ceph-mon[75334]: 8.17 scrub starts
Feb 02 15:09:17 compute-0 ceph-mon[75334]: 8.17 scrub ok
Feb 02 15:09:17 compute-0 ceph-mon[75334]: pgmap v130: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 18 op/s; 128 B/s, 1 keys/s, 1 objects/s recovering
Feb 02 15:09:17 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 15:09:17 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 02 15:09:17 compute-0 ceph-mon[75334]: osdmap e63: 3 total, 3 up, 3 in
Feb 02 15:09:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb 02 15:09:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb 02 15:09:17 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 64 pg[6.5( v 35'39 lc 33'9 (0'0,35'39] local-lis/les=63/64 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:17 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 64 pg[6.d( v 35'39 lc 33'13 (0'0,35'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb 02 15:09:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb 02 15:09:18 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb 02 15:09:18 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb 02 15:09:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 21 op/s; 161 B/s, 2 keys/s, 2 objects/s recovering
Feb 02 15:09:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 02 15:09:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 02 15:09:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 02 15:09:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 02 15:09:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb 02 15:09:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 15:09:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 15:09:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb 02 15:09:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb 02 15:09:18 compute-0 ceph-mon[75334]: 5.6 scrub starts
Feb 02 15:09:18 compute-0 ceph-mon[75334]: 5.6 scrub ok
Feb 02 15:09:18 compute-0 ceph-mon[75334]: osdmap e64: 3 total, 3 up, 3 in
Feb 02 15:09:18 compute-0 ceph-mon[75334]: 4.3 scrub starts
Feb 02 15:09:18 compute-0 ceph-mon[75334]: 4.3 scrub ok
Feb 02 15:09:18 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 02 15:09:18 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 02 15:09:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb 02 15:09:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb 02 15:09:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb 02 15:09:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb 02 15:09:19 compute-0 ceph-mgr[75628]: [progress INFO root] Writing back 16 completed events
Feb 02 15:09:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 02 15:09:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:19 compute-0 ceph-mon[75334]: 11.13 scrub starts
Feb 02 15:09:19 compute-0 ceph-mon[75334]: 11.13 scrub ok
Feb 02 15:09:19 compute-0 ceph-mon[75334]: pgmap v133: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 21 op/s; 161 B/s, 2 keys/s, 2 objects/s recovering
Feb 02 15:09:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 15:09:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 02 15:09:19 compute-0 ceph-mon[75334]: osdmap e65: 3 total, 3 up, 3 in
Feb 02 15:09:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb 02 15:09:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.568070412s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 active pruub 124.493400574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.568013191s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.493400574s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.570377350s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=58'484 lcod 58'484 active pruub 124.496223450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.570310593s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=58'484 lcod 58'484 unknown NOTIFY pruub 124.496223450s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.570534706s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 active pruub 124.496635437s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.570466995s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.496635437s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.571805954s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=58'488 lcod 58'488 active pruub 124.499107361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 65 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.571199417s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=58'488 lcod 58'488 unknown NOTIFY pruub 124.499107361s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb 02 15:09:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb 02 15:09:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=58'488 lcod 58'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=58'488 lcod 58'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Feb 02 15:09:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 02 15:09:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 02 15:09:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 02 15:09:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 10.0 scrub starts
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 10.0 scrub ok
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 7.1d scrub starts
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 7.1d scrub ok
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 4.1e scrub starts
Feb 02 15:09:20 compute-0 ceph-mon[75334]: 4.1e scrub ok
Feb 02 15:09:20 compute-0 ceph-mon[75334]: osdmap e66: 3 total, 3 up, 3 in
Feb 02 15:09:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 02 15:09:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 02 15:09:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb 02 15:09:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 15:09:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 15:09:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb 02 15:09:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb 02 15:09:21 compute-0 ceph-mon[75334]: pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Feb 02 15:09:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 15:09:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 02 15:09:21 compute-0 ceph-mon[75334]: osdmap e67: 3 total, 3 up, 3 in
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 67 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=66/67 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=58'485 lcod 58'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 67 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=66/67 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=58'489 lcod 58'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 360 B/s, 1 objects/s recovering
Feb 02 15:09:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 02 15:09:22 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 02 15:09:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 02 15:09:22 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 02 15:09:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb 02 15:09:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 02 15:09:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 02 15:09:22 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 15:09:22 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 15:09:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb 02 15:09:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.155169487s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 active pruub 128.552444458s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.154977798s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 128.552444458s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=11.097169876s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=58'486 lcod 58'486 active pruub 124.495391846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=11.097130775s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=58'486 lcod 58'486 unknown NOTIFY pruub 124.495391846s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=66/67 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.153800011s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=58'485 lcod 58'484 active pruub 128.552215576s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=66/67 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.153647423s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=58'485 lcod 58'484 unknown NOTIFY pruub 128.552215576s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.150740623s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 active pruub 128.549407959s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.150652885s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 128.549407959s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=66/67 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.153297424s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=58'489 lcod 58'488 active pruub 128.552230835s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=66/67 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.153238297s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=58'489 lcod 58'488 unknown NOTIFY pruub 128.552230835s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=11.099726677s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 active pruub 124.498847961s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 68 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=11.099691391s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.498847961s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=58'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=58'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=0/0 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=58'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=0/0 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=58'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb 02 15:09:23 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.146295547s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 active pruub 127.014770508s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.146221161s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 unknown NOTIFY pruub 127.014770508s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67) [2] r=0 lpr=68 pi=[56,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 67 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.156557083s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=58'484 lcod 58'484 active pruub 128.026473999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.156393051s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=58'484 lcod 58'484 unknown NOTIFY pruub 128.026473999s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 67 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.144697189s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=58'484 lcod 58'484 active pruub 127.014785767s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.144521713s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=58'484 lcod 58'484 unknown NOTIFY pruub 127.014785767s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=68 pruub=8.399067879s) [2] r=-1 lpr=68 pi=[46,68)/1 crt=35'39 lcod 0'0 active pruub 126.269584656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=68 pruub=8.399024010s) [2] r=-1 lpr=68 pi=[46,68)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 126.269584656s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 67 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.155819893s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=58'486 lcod 58'486 active pruub 128.026504517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 68 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.155697823s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=58'486 lcod 58'486 unknown NOTIFY pruub 128.026504517s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67) [2] r=0 lpr=68 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=67) [2] r=0 lpr=68 pi=[56,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=67) [2] r=0 lpr=68 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb 02 15:09:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb 02 15:09:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=0 lpr=69 pi=[56,69)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=0 lpr=69 pi=[56,69)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=0 lpr=69 pi=[56,69)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=0 lpr=69 pi=[56,69)/1 crt=58'484 lcod 58'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 69 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=57/58 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[56,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[56,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[56,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[56,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=68/69 n=1 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:23 compute-0 ceph-mon[75334]: pgmap v138: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 360 B/s, 1 objects/s recovering
Feb 02 15:09:23 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 15:09:23 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 02 15:09:23 compute-0 ceph-mon[75334]: osdmap e68: 3 total, 3 up, 3 in
Feb 02 15:09:23 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 69 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 69 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 69 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:23 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 69 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.e( v 58'489 (0'0,58'489] local-lis/les=68/69 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=58'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=68/69 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=58'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=48/34 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:23 compute-0 sshd-session[98609]: Accepted publickey for zuul from 192.168.122.30 port 46466 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:09:23 compute-0 systemd-logind[786]: New session 34 of user zuul.
Feb 02 15:09:23 compute-0 systemd[1]: Started Session 34 of User zuul.
Feb 02 15:09:23 compute-0 sshd-session[98609]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:09:24 compute-0 sudo[98665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:09:24 compute-0 sudo[98665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:24 compute-0 sudo[98665]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:24 compute-0 sudo[98690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:09:24 compute-0 sudo[98690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:24 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 02 15:09:24 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 02 15:09:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 2 unknown, 4 peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 5 objects/s recovering
Feb 02 15:09:24 compute-0 sudo[98690]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb 02 15:09:24 compute-0 ceph-mon[75334]: 8.13 scrub starts
Feb 02 15:09:24 compute-0 ceph-mon[75334]: 8.13 scrub ok
Feb 02 15:09:24 compute-0 ceph-mon[75334]: osdmap e69: 3 total, 3 up, 3 in
Feb 02 15:09:24 compute-0 ceph-mon[75334]: pgmap v141: 305 pgs: 2 unknown, 4 peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 5 objects/s recovering
Feb 02 15:09:24 compute-0 ceph-mon[75334]: osdmap e70: 3 total, 3 up, 3 in
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:09:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:09:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:09:24 compute-0 python3.9[98826]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:09:24 compute-0 sudo[98844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:09:24 compute-0 sudo[98844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:24 compute-0 sudo[98844]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:25 compute-0 sudo[98872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:09:25 compute-0 sudo[98872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:25 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb 02 15:09:25 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.286956167 +0000 UTC m=+0.034881216 container create 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:09:25 compute-0 systemd[1]: Started libpod-conmon-4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1.scope.
Feb 02 15:09:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.270517722 +0000 UTC m=+0.018442771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.371939096 +0000 UTC m=+0.119864145 container init 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.379893463 +0000 UTC m=+0.127818482 container start 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.383440571 +0000 UTC m=+0.131365610 container attach 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:09:25 compute-0 reverent_raman[98964]: 167 167
Feb 02 15:09:25 compute-0 systemd[1]: libpod-4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1.scope: Deactivated successfully.
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.385690692 +0000 UTC m=+0.133615711 container died 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf038cbe951d8e7a01a30a20705b4c1128259a905537df52163a7de9a1e6de0-merged.mount: Deactivated successfully.
Feb 02 15:09:25 compute-0 podman[98928]: 2026-02-02 15:09:25.423588554 +0000 UTC m=+0.171513583 container remove 4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_raman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:09:25 compute-0 systemd[1]: libpod-conmon-4790659697ebd79d78b783d23ebf99e9efeced2ecc1dc9b6663b980bfe940fe1.scope: Deactivated successfully.
Feb 02 15:09:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 70 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=69/70 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=58'487 lcod 58'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 70 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=69/70 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[57,69)/1 crt=58'487 lcod 58'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 70 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=69/70 n=7 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[57,69)/1 crt=58'485 lcod 58'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 70 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=69/70 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[56,69)/1 crt=58'485 lcod 58'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[56,69)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 70 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=69/70 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:25 compute-0 podman[99011]: 2026-02-02 15:09:25.56376669 +0000 UTC m=+0.055521414 container create 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:25 compute-0 systemd[1]: Started libpod-conmon-138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113.scope.
Feb 02 15:09:25 compute-0 podman[99011]: 2026-02-02 15:09:25.542519198 +0000 UTC m=+0.034274022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb 02 15:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb 02 15:09:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 pct=0'0 crt=58'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 crt=58'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 pct=0'0 crt=58'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=58'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=71) [2] r=0 lpr=71 pi=[56,71)/1 pct=0'0 crt=58'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 71 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=71) [2] r=0 lpr=71 pi=[56,71)/1 crt=58'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:25 compute-0 podman[99011]: 2026-02-02 15:09:25.666020273 +0000 UTC m=+0.157775087 container init 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 71 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=71 pruub=15.793219566s) [2] async=[2] r=-1 lpr=71 pi=[57,71)/1 crt=58'487 lcod 58'486 active pruub 135.782730103s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 71 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=71 pruub=15.793129921s) [2] r=-1 lpr=71 pi=[57,71)/1 crt=58'487 lcod 58'486 unknown NOTIFY pruub 135.782730103s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 71 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=71 pruub=15.792307854s) [2] async=[2] r=-1 lpr=71 pi=[56,71)/1 crt=58'485 lcod 58'484 active pruub 135.782897949s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 71 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.792175293s) [2] async=[2] r=-1 lpr=71 pi=[48,71)/1 crt=58'487 lcod 58'486 active pruub 132.000549316s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 71 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.792080879s) [2] r=-1 lpr=71 pi=[48,71)/1 crt=58'487 lcod 58'486 unknown NOTIFY pruub 132.000549316s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 71 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=71 pruub=15.792007446s) [2] r=-1 lpr=71 pi=[56,71)/1 crt=58'485 lcod 58'484 unknown NOTIFY pruub 135.782897949s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:25 compute-0 podman[99011]: 2026-02-02 15:09:25.679672537 +0000 UTC m=+0.171427311 container start 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:09:25 compute-0 podman[99011]: 2026-02-02 15:09:25.684483163 +0000 UTC m=+0.176237887 container attach 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:09:25 compute-0 ceph-mon[75334]: 10.c scrub starts
Feb 02 15:09:25 compute-0 ceph-mon[75334]: 10.c scrub ok
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:09:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:09:25 compute-0 ceph-mon[75334]: 4.0 scrub starts
Feb 02 15:09:25 compute-0 ceph-mon[75334]: 4.0 scrub ok
Feb 02 15:09:25 compute-0 ceph-mon[75334]: osdmap e71: 3 total, 3 up, 3 in
Feb 02 15:09:26 compute-0 elated_williamson[99028]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:09:26 compute-0 elated_williamson[99028]: --> All data devices are unavailable
Feb 02 15:09:26 compute-0 systemd[1]: libpod-138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113.scope: Deactivated successfully.
Feb 02 15:09:26 compute-0 podman[99126]: 2026-02-02 15:09:26.16474388 +0000 UTC m=+0.030488058 container died 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8985cdafc3f5d46802d5365a837f19a03e61a30656b72c8cc82a61ae5ee4563-merged.mount: Deactivated successfully.
Feb 02 15:09:26 compute-0 podman[99126]: 2026-02-02 15:09:26.203808858 +0000 UTC m=+0.069552926 container remove 138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_williamson, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:09:26 compute-0 systemd[1]: libpod-conmon-138800b7f32c1c2d3f65d6380b8da2755e9933fbef62d9fdfa403e3dd1aff113.scope: Deactivated successfully.
Feb 02 15:09:26 compute-0 sudo[98872]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:26 compute-0 sudo[99172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:09:26 compute-0 sudo[99172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:26 compute-0 sudo[99172]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:26 compute-0 sudo[99251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivashkikcfjozkitvjllkcubgtfnsiwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044965.8945627-27-262164167192406/AnsiballZ_command.py'
Feb 02 15:09:26 compute-0 sudo[99251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:26 compute-0 sudo[99227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:09:26 compute-0 sudo[99227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:26 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 02 15:09:26 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 02 15:09:26 compute-0 python3.9[99263]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:09:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb 02 15:09:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb 02 15:09:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb 02 15:09:26 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=72 pruub=14.794933319s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=39'483 active pruub 135.784545898s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=72 pruub=14.794857025s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=39'483 unknown NOTIFY pruub 135.784545898s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:26 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 72 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.792055130s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=58'485 lcod 58'484 active pruub 135.782775879s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 72 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.791926384s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=58'485 lcod 58'484 unknown NOTIFY pruub 135.782775879s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=72) [2] r=0 lpr=72 pi=[48,72)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 pct=0'0 crt=58'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=72) [2] r=0 lpr=72 pi=[48,72)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=0/0 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=58'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:26 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=72 pruub=14.789493561s) [2] async=[2] r=-1 lpr=72 pi=[48,72)/1 crt=39'483 lcod 0'0 active pruub 132.002456665s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:26 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=69/70 n=7 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=72 pruub=14.789367676s) [2] r=-1 lpr=72 pi=[48,72)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 132.002456665s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.17( v 58'485 (0'0,58'485] local-lis/les=71/72 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=71) [2] r=0 lpr=71 pi=[56,71)/1 crt=58'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.7( v 58'487 (0'0,58'487] local-lis/les=71/72 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=71) [2] r=0 lpr=71 pi=[57,71)/1 crt=58'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:26 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 72 pg[9.18( v 58'487 (0'0,58'487] local-lis/les=71/72 n=6 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=58'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.680390273 +0000 UTC m=+0.071020290 container create 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.643157015 +0000 UTC m=+0.033787092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:26 compute-0 systemd[1]: Started libpod-conmon-9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6.scope.
Feb 02 15:09:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 6 activating+remapped, 4 peering, 295 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37/254 objects misplaced (14.567%); 213 B/s, 6 objects/s recovering
Feb 02 15:09:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.781606843 +0000 UTC m=+0.172236880 container init 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.789970109 +0000 UTC m=+0.180600096 container start 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.79319772 +0000 UTC m=+0.183827747 container attach 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:26 compute-0 mystifying_haibt[99301]: 167 167
Feb 02 15:09:26 compute-0 systemd[1]: libpod-9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6.scope: Deactivated successfully.
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.798186732 +0000 UTC m=+0.188816759 container died 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1eba1dfd172ded90264e657d87ee02098a83feabae00b7885f52056ef6c490-merged.mount: Deactivated successfully.
Feb 02 15:09:26 compute-0 podman[99284]: 2026-02-02 15:09:26.844778357 +0000 UTC m=+0.235408374 container remove 9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:26 compute-0 systemd[1]: libpod-conmon-9e05d2fc6f37cce86b9ff741f6cf918c1a192e512e93fa42b7b22f7688e35bf6.scope: Deactivated successfully.
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.023790427 +0000 UTC m=+0.053311706 container create 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:09:27 compute-0 systemd[1]: Started libpod-conmon-6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223.scope.
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:26.996014489 +0000 UTC m=+0.025535818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba88889e2572bbd573dc474dedc388ab2e018ee2e8a34936db05e72c89e878b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba88889e2572bbd573dc474dedc388ab2e018ee2e8a34936db05e72c89e878b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba88889e2572bbd573dc474dedc388ab2e018ee2e8a34936db05e72c89e878b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba88889e2572bbd573dc474dedc388ab2e018ee2e8a34936db05e72c89e878b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.115188968 +0000 UTC m=+0.144710217 container init 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.1206759 +0000 UTC m=+0.150197139 container start 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.130525919 +0000 UTC m=+0.160047158 container attach 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:09:27 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb 02 15:09:27 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb 02 15:09:27 compute-0 agitated_clarke[99345]: {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     "0": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "devices": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "/dev/loop3"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             ],
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_name": "ceph_lv0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_size": "21470642176",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "name": "ceph_lv0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "tags": {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_name": "ceph",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.crush_device_class": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.encrypted": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.objectstore": "bluestore",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_id": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.vdo": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.with_tpm": "0"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             },
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "vg_name": "ceph_vg0"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         }
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     ],
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     "1": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "devices": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "/dev/loop4"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             ],
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_name": "ceph_lv1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_size": "21470642176",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "name": "ceph_lv1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "tags": {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_name": "ceph",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.crush_device_class": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.encrypted": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.objectstore": "bluestore",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_id": "1",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.vdo": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.with_tpm": "0"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             },
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "vg_name": "ceph_vg1"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         }
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     ],
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     "2": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "devices": [
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "/dev/loop5"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             ],
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_name": "ceph_lv2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_size": "21470642176",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "name": "ceph_lv2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "tags": {
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.cluster_name": "ceph",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.crush_device_class": "",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.encrypted": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.objectstore": "bluestore",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osd_id": "2",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.vdo": "0",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:                 "ceph.with_tpm": "0"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             },
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "type": "block",
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:             "vg_name": "ceph_vg2"
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:         }
Feb 02 15:09:27 compute-0 agitated_clarke[99345]:     ]
Feb 02 15:09:27 compute-0 agitated_clarke[99345]: }
Feb 02 15:09:27 compute-0 systemd[1]: libpod-6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223.scope: Deactivated successfully.
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.386433058 +0000 UTC m=+0.415954337 container died 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba88889e2572bbd573dc474dedc388ab2e018ee2e8a34936db05e72c89e878b-merged.mount: Deactivated successfully.
Feb 02 15:09:27 compute-0 podman[99327]: 2026-02-02 15:09:27.433966525 +0000 UTC m=+0.463487774 container remove 6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:09:27 compute-0 systemd[1]: libpod-conmon-6ce9fce66d4361701a5cc6a8655f049cdaba4adeaf5340847113dbebfe4d8223.scope: Deactivated successfully.
Feb 02 15:09:27 compute-0 sudo[99227]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:27 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 02 15:09:27 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 02 15:09:27 compute-0 sudo[99366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:09:27 compute-0 sudo[99366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:27 compute-0 sudo[99366]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:27 compute-0 sudo[99391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:09:27 compute-0 sudo[99391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb 02 15:09:27 compute-0 ceph-mon[75334]: 10.a scrub starts
Feb 02 15:09:27 compute-0 ceph-mon[75334]: 10.a scrub ok
Feb 02 15:09:27 compute-0 ceph-mon[75334]: osdmap e72: 3 total, 3 up, 3 in
Feb 02 15:09:27 compute-0 ceph-mon[75334]: pgmap v145: 305 pgs: 6 activating+remapped, 4 peering, 295 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37/254 objects misplaced (14.567%); 213 B/s, 6 objects/s recovering
Feb 02 15:09:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb 02 15:09:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb 02 15:09:27 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=48/34 lis/c=69/48 les/c/f=70/49/0 sis=72) [2] r=0 lpr=72 pi=[48,72)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:27 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 73 pg[9.f( v 58'485 (0'0,58'485] local-lis/les=72/73 n=7 ec=48/34 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/1 crt=58'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:27 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=48/34 lis/c=69/56 les/c/f=70/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.896144159 +0000 UTC m=+0.047697621 container create 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:09:27 compute-0 systemd[1]: Started libpod-conmon-4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a.scope.
Feb 02 15:09:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.87458595 +0000 UTC m=+0.026139482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.978616133 +0000 UTC m=+0.130169645 container init 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.985446245 +0000 UTC m=+0.136999727 container start 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:09:27 compute-0 recursing_rhodes[99443]: 167 167
Feb 02 15:09:27 compute-0 systemd[1]: libpod-4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a.scope: Deactivated successfully.
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.992206455 +0000 UTC m=+0.143759997 container attach 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:09:27 compute-0 podman[99427]: 2026-02-02 15:09:27.993164586 +0000 UTC m=+0.144718068 container died 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc2d68802d6da35060ca9def51c0e9d5e63fe86e097ab113119d6b824c160995-merged.mount: Deactivated successfully.
Feb 02 15:09:28 compute-0 podman[99427]: 2026-02-02 15:09:28.043245349 +0000 UTC m=+0.194798831 container remove 4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:09:28 compute-0 systemd[1]: libpod-conmon-4b44811d6e3ebfc3339e305d8bd1427916d0ca89f9679c2b94dc99cd7752270a.scope: Deactivated successfully.
Feb 02 15:09:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb 02 15:09:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb 02 15:09:28 compute-0 podman[99469]: 2026-02-02 15:09:28.210112058 +0000 UTC m=+0.054591274 container create 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:09:28 compute-0 systemd[1]: Started libpod-conmon-79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3.scope.
Feb 02 15:09:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecd3eb1717a71bff7394ffcb6616f4e21994638f37e572444000ca1109ac870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecd3eb1717a71bff7394ffcb6616f4e21994638f37e572444000ca1109ac870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecd3eb1717a71bff7394ffcb6616f4e21994638f37e572444000ca1109ac870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecd3eb1717a71bff7394ffcb6616f4e21994638f37e572444000ca1109ac870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:09:28 compute-0 podman[99469]: 2026-02-02 15:09:28.185140273 +0000 UTC m=+0.029619559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:09:28 compute-0 podman[99469]: 2026-02-02 15:09:28.313119339 +0000 UTC m=+0.157598595 container init 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:09:28 compute-0 podman[99469]: 2026-02-02 15:09:28.320171435 +0000 UTC m=+0.164650681 container start 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:09:28 compute-0 podman[99469]: 2026-02-02 15:09:28.324020811 +0000 UTC m=+0.168500057 container attach 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:09:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb 02 15:09:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb 02 15:09:28 compute-0 ceph-mon[75334]: 4.15 scrub starts
Feb 02 15:09:28 compute-0 ceph-mon[75334]: 4.15 scrub ok
Feb 02 15:09:28 compute-0 ceph-mon[75334]: 11.1d scrub starts
Feb 02 15:09:28 compute-0 ceph-mon[75334]: 11.1d scrub ok
Feb 02 15:09:28 compute-0 ceph-mon[75334]: osdmap e73: 3 total, 3 up, 3 in
Feb 02 15:09:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 26 op/s; 422 B/s, 9 objects/s recovering
Feb 02 15:09:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 02 15:09:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 02 15:09:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 02 15:09:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 02 15:09:28 compute-0 lvm[99563]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:09:28 compute-0 lvm[99563]: VG ceph_vg0 finished
Feb 02 15:09:28 compute-0 lvm[99566]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:09:28 compute-0 lvm[99566]: VG ceph_vg1 finished
Feb 02 15:09:28 compute-0 lvm[99567]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:09:28 compute-0 lvm[99567]: VG ceph_vg0 finished
Feb 02 15:09:28 compute-0 lvm[99569]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:09:28 compute-0 lvm[99569]: VG ceph_vg2 finished
Feb 02 15:09:29 compute-0 priceless_brattain[99487]: {}
Feb 02 15:09:29 compute-0 systemd[1]: libpod-79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3.scope: Deactivated successfully.
Feb 02 15:09:29 compute-0 podman[99469]: 2026-02-02 15:09:29.072434878 +0000 UTC m=+0.916914094 container died 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-eecd3eb1717a71bff7394ffcb6616f4e21994638f37e572444000ca1109ac870-merged.mount: Deactivated successfully.
Feb 02 15:09:29 compute-0 podman[99469]: 2026-02-02 15:09:29.125358405 +0000 UTC m=+0.969837611 container remove 79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_brattain, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:09:29 compute-0 systemd[1]: libpod-conmon-79aaf901225bef418b4ae44baf39f9f6f379791752744a08acdcc0808ee753c3.scope: Deactivated successfully.
Feb 02 15:09:29 compute-0 sudo[99391]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:09:29 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb 02 15:09:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:09:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:29 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb 02 15:09:29 compute-0 sudo[99584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:09:29 compute-0 sudo[99584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:09:29 compute-0 sudo[99584]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb 02 15:09:29 compute-0 ceph-mon[75334]: 4.6 scrub starts
Feb 02 15:09:29 compute-0 ceph-mon[75334]: 4.6 scrub ok
Feb 02 15:09:29 compute-0 ceph-mon[75334]: 8.1e scrub starts
Feb 02 15:09:29 compute-0 ceph-mon[75334]: 8.1e scrub ok
Feb 02 15:09:29 compute-0 ceph-mon[75334]: pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 26 op/s; 422 B/s, 9 objects/s recovering
Feb 02 15:09:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 02 15:09:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 02 15:09:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:09:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 15:09:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 15:09:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb 02 15:09:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb 02 15:09:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 02 15:09:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 02 15:09:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:30 compute-0 ceph-mon[75334]: 4.17 scrub starts
Feb 02 15:09:30 compute-0 ceph-mon[75334]: 4.17 scrub ok
Feb 02 15:09:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 15:09:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 02 15:09:30 compute-0 ceph-mon[75334]: osdmap e74: 3 total, 3 up, 3 in
Feb 02 15:09:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 332 B/s, 7 objects/s recovering
Feb 02 15:09:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 02 15:09:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 02 15:09:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 02 15:09:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 02 15:09:30 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 74 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=14.785111427s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=35'39 lcod 0'0 active pruub 136.213455200s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:30 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 74 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=14.784789085s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 136.213455200s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:30 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 74 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb 02 15:09:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 02 15:09:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb 02 15:09:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 02 15:09:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb 02 15:09:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 15:09:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 15:09:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb 02 15:09:31 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 75 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=75 pruub=15.017611504s) [0] r=-1 lpr=75 pi=[59,75)/1 crt=35'39 lcod 0'0 active pruub 137.279953003s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:31 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 75 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=75 pruub=15.017551422s) [0] r=-1 lpr=75 pi=[59,75)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 137.279953003s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb 02 15:09:31 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 75 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=75) [0] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:31 compute-0 ceph-mon[75334]: 3.13 scrub starts
Feb 02 15:09:31 compute-0 ceph-mon[75334]: 3.13 scrub ok
Feb 02 15:09:31 compute-0 ceph-mon[75334]: pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 332 B/s, 7 objects/s recovering
Feb 02 15:09:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 02 15:09:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 02 15:09:31 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 75 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=74/75 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb 02 15:09:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb 02 15:09:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb 02 15:09:32 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 76 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=75/76 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=75) [0] r=0 lpr=75 pi=[59,75)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:32 compute-0 ceph-mon[75334]: 5.8 scrub starts
Feb 02 15:09:32 compute-0 ceph-mon[75334]: 7.17 scrub starts
Feb 02 15:09:32 compute-0 ceph-mon[75334]: 5.8 scrub ok
Feb 02 15:09:32 compute-0 ceph-mon[75334]: 7.17 scrub ok
Feb 02 15:09:32 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 15:09:32 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 02 15:09:32 compute-0 ceph-mon[75334]: osdmap e75: 3 total, 3 up, 3 in
Feb 02 15:09:32 compute-0 ceph-mon[75334]: osdmap e76: 3 total, 3 up, 3 in
Feb 02 15:09:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 7 objects/s recovering
Feb 02 15:09:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 02 15:09:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 02 15:09:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 02 15:09:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 02 15:09:32 compute-0 sudo[99251]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:33 compute-0 sshd-session[98612]: Connection closed by 192.168.122.30 port 46466
Feb 02 15:09:33 compute-0 sshd-session[98609]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:09:33 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Feb 02 15:09:33 compute-0 systemd[1]: session-34.scope: Consumed 7.547s CPU time.
Feb 02 15:09:33 compute-0 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Feb 02 15:09:33 compute-0 systemd-logind[786]: Removed session 34.
Feb 02 15:09:33 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb 02 15:09:33 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb 02 15:09:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb 02 15:09:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb 02 15:09:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb 02 15:09:33 compute-0 ceph-mon[75334]: pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 7 objects/s recovering
Feb 02 15:09:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 02 15:09:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 02 15:09:33 compute-0 ceph-mon[75334]: 7.16 scrub starts
Feb 02 15:09:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 15:09:33 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 15:09:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb 02 15:09:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb 02 15:09:34 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 77 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=12.040115356s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=35'39 active pruub 141.073257446s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:34 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 77 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=77 pruub=12.040064812s) [1] r=-1 lpr=77 pi=[60,77)/1 crt=35'39 unknown NOTIFY pruub 141.073257446s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:34 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 77 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb 02 15:09:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 02 15:09:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 02 15:09:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 02 15:09:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 02 15:09:34 compute-0 ceph-mon[75334]: 7.16 scrub ok
Feb 02 15:09:34 compute-0 ceph-mon[75334]: 5.a scrub starts
Feb 02 15:09:34 compute-0 ceph-mon[75334]: 5.a scrub ok
Feb 02 15:09:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 15:09:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 02 15:09:34 compute-0 ceph-mon[75334]: osdmap e77: 3 total, 3 up, 3 in
Feb 02 15:09:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb 02 15:09:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb 02 15:09:34 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 78 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=77/78 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=77) [1] r=0 lpr=77 pi=[60,77)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb 02 15:09:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb 02 15:09:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb 02 15:09:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb 02 15:09:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:35 compute-0 ceph-mon[75334]: pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 02 15:09:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 02 15:09:35 compute-0 ceph-mon[75334]: osdmap e78: 3 total, 3 up, 3 in
Feb 02 15:09:35 compute-0 ceph-mon[75334]: 4.19 scrub starts
Feb 02 15:09:35 compute-0 ceph-mon[75334]: 4.19 scrub ok
Feb 02 15:09:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb 02 15:09:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 15:09:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 15:09:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb 02 15:09:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 79 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79 pruub=13.366340637s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=58'486 lcod 58'486 active pruub 140.497055054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 79 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79 pruub=13.366276741s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=58'486 lcod 58'486 unknown NOTIFY pruub 140.497055054s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 79 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79 pruub=13.366157532s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=39'483 lcod 0'0 active pruub 140.496734619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 79 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79 pruub=13.365884781s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 140.496734619s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 79 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 79 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 02 15:09:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 02 15:09:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 02 15:09:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 02 15:09:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 02 15:09:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb 02 15:09:36 compute-0 ceph-mon[75334]: 10.5 scrub starts
Feb 02 15:09:36 compute-0 ceph-mon[75334]: 10.5 scrub ok
Feb 02 15:09:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 15:09:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 02 15:09:36 compute-0 ceph-mon[75334]: osdmap e79: 3 total, 3 up, 3 in
Feb 02 15:09:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 02 15:09:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 02 15:09:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 15:09:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 15:09:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb 02 15:09:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[48,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[48,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[48,80)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[48,80)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:36 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 80 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=80 pruub=13.000941277s) [1] r=-1 lpr=80 pi=[63,80)/1 crt=35'39 active pruub 144.140930176s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 80 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=80 pruub=13.000689507s) [1] r=-1 lpr=80 pi=[63,80)/1 crt=35'39 unknown NOTIFY pruub 144.140930176s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 80 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 80 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=0 lpr=80 pi=[48,80)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 80 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=48/49 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=0 lpr=80 pi=[48,80)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=0 lpr=80 pi=[48,80)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:36 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] r=0 lpr=80 pi=[48,80)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:37 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb 02 15:09:37 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb 02 15:09:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb 02 15:09:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb 02 15:09:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb 02 15:09:37 compute-0 ceph-mon[75334]: pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 02 15:09:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 15:09:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 02 15:09:37 compute-0 ceph-mon[75334]: osdmap e80: 3 total, 3 up, 3 in
Feb 02 15:09:37 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 81 pg[6.d( v 35'39 lc 33'13 (0'0,35'39] local-lis/les=80/81 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 02 15:09:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=80/81 n=7 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] async=[2] r=0 lpr=80 pi=[48,80)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 81 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=80/81 n=6 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=80) [2]/[1] async=[2] r=0 lpr=80 pi=[48,80)/1 crt=58'487 lcod 58'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 02 15:09:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 02 15:09:38 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 02 15:09:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 02 15:09:38 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 02 15:09:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb 02 15:09:38 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 15:09:38 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 15:09:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb 02 15:09:38 compute-0 ceph-mon[75334]: 10.3 scrub starts
Feb 02 15:09:38 compute-0 ceph-mon[75334]: 10.3 scrub ok
Feb 02 15:09:38 compute-0 ceph-mon[75334]: osdmap e81: 3 total, 3 up, 3 in
Feb 02 15:09:38 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 02 15:09:38 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 02 15:09:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 82 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=80/81 n=6 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82 pruub=15.716554642s) [2] async=[2] r=-1 lpr=82 pi=[48,82)/1 crt=58'487 lcod 58'486 active pruub 145.117980957s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 82 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=80/81 n=6 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82 pruub=15.716450691s) [2] r=-1 lpr=82 pi=[48,82)/1 crt=58'487 lcod 58'486 unknown NOTIFY pruub 145.117980957s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=80/81 n=7 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82 pruub=15.715744019s) [2] async=[2] r=-1 lpr=82 pi=[48,82)/1 crt=39'483 lcod 0'0 active pruub 145.117980957s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:38 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=80/81 n=7 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82 pruub=15.715520859s) [2] r=-1 lpr=82 pi=[48,82)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 145.117980957s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:38 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:38 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:38 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 82 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 pct=0'0 crt=58'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:38 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 82 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 crt=58'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:39 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb 02 15:09:39 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb 02 15:09:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 02 15:09:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 02 15:09:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb 02 15:09:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb 02 15:09:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb 02 15:09:39 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=82/83 n=7 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:39 compute-0 ceph-mon[75334]: 5.d scrub starts
Feb 02 15:09:39 compute-0 ceph-mon[75334]: 5.d scrub ok
Feb 02 15:09:39 compute-0 ceph-mon[75334]: pgmap v160: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb 02 15:09:39 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 15:09:39 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 02 15:09:39 compute-0 ceph-mon[75334]: osdmap e82: 3 total, 3 up, 3 in
Feb 02 15:09:39 compute-0 ceph-mon[75334]: 4.1f scrub starts
Feb 02 15:09:39 compute-0 ceph-mon[75334]: 4.1f scrub ok
Feb 02 15:09:39 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 83 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=82/83 n=6 ec=48/34 lis/c=80/48 les/c/f=81/49/0 sis=82) [2] r=0 lpr=82 pi=[48,82)/1 crt=58'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Feb 02 15:09:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 02 15:09:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 02 15:09:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 02 15:09:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 02 15:09:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb 02 15:09:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 15:09:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 15:09:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb 02 15:09:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb 02 15:09:40 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 84 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=84 pruub=13.872196198s) [2] r=-1 lpr=84 pi=[60,84)/1 crt=35'39 active pruub 149.073348999s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:40 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 84 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=84 pruub=13.872118950s) [2] r=-1 lpr=84 pi=[60,84)/1 crt=35'39 unknown NOTIFY pruub 149.073348999s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:40 compute-0 ceph-mon[75334]: 5.e scrub starts
Feb 02 15:09:40 compute-0 ceph-mon[75334]: 5.e scrub ok
Feb 02 15:09:40 compute-0 ceph-mon[75334]: osdmap e83: 3 total, 3 up, 3 in
Feb 02 15:09:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 02 15:09:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 02 15:09:40 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 84 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=84) [2] r=0 lpr=84 pi=[60,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb 02 15:09:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb 02 15:09:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb 02 15:09:41 compute-0 ceph-mon[75334]: pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Feb 02 15:09:41 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 15:09:41 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 02 15:09:41 compute-0 ceph-mon[75334]: osdmap e84: 3 total, 3 up, 3 in
Feb 02 15:09:41 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 85 pg[6.f( v 35'39 lc 33'1 (0'0,35'39] local-lis/les=84/85 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=84) [2] r=0 lpr=84 pi=[60,84)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:42 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb 02 15:09:42 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb 02 15:09:42 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 02 15:09:42 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:09:42
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root']
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:09:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Feb 02 15:09:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb 02 15:09:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 02 15:09:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb 02 15:09:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 02 15:09:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb 02 15:09:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb 02 15:09:42 compute-0 ceph-mon[75334]: osdmap e85: 3 total, 3 up, 3 in
Feb 02 15:09:42 compute-0 ceph-mon[75334]: 4.1d scrub starts
Feb 02 15:09:42 compute-0 ceph-mon[75334]: 4.1d scrub ok
Feb 02 15:09:42 compute-0 ceph-mon[75334]: pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Feb 02 15:09:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 02 15:09:43 compute-0 ceph-mon[75334]: 5.10 scrub starts
Feb 02 15:09:43 compute-0 ceph-mon[75334]: 5.10 scrub ok
Feb 02 15:09:43 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 02 15:09:43 compute-0 ceph-mon[75334]: osdmap e86: 3 total, 3 up, 3 in
Feb 02 15:09:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 02 15:09:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:09:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 224 B/s, 3 objects/s recovering
Feb 02 15:09:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb 02 15:09:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 02 15:09:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb 02 15:09:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 02 15:09:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb 02 15:09:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb 02 15:09:44 compute-0 ceph-mon[75334]: 8.19 scrub starts
Feb 02 15:09:44 compute-0 ceph-mon[75334]: 8.19 scrub ok
Feb 02 15:09:44 compute-0 ceph-mon[75334]: pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 224 B/s, 3 objects/s recovering
Feb 02 15:09:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 02 15:09:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 02 15:09:46 compute-0 ceph-mon[75334]: osdmap e87: 3 total, 3 up, 3 in
Feb 02 15:09:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 2 objects/s recovering
Feb 02 15:09:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb 02 15:09:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 02 15:09:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb 02 15:09:47 compute-0 ceph-mon[75334]: pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 2 objects/s recovering
Feb 02 15:09:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 02 15:09:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 02 15:09:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb 02 15:09:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb 02 15:09:47 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb 02 15:09:47 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb 02 15:09:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 02 15:09:48 compute-0 ceph-mon[75334]: osdmap e88: 3 total, 3 up, 3 in
Feb 02 15:09:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 171 B/s, 2 objects/s recovering
Feb 02 15:09:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb 02 15:09:48 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 02 15:09:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb 02 15:09:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 02 15:09:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb 02 15:09:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb 02 15:09:49 compute-0 ceph-mon[75334]: 10.1d scrub starts
Feb 02 15:09:49 compute-0 ceph-mon[75334]: 10.1d scrub ok
Feb 02 15:09:49 compute-0 ceph-mon[75334]: pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 171 B/s, 2 objects/s recovering
Feb 02 15:09:49 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 02 15:09:49 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb 02 15:09:49 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb 02 15:09:49 compute-0 sshd-session[99653]: Accepted publickey for zuul from 192.168.122.30 port 58098 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:09:49 compute-0 systemd-logind[786]: New session 35 of user zuul.
Feb 02 15:09:49 compute-0 systemd[1]: Started Session 35 of User zuul.
Feb 02 15:09:49 compute-0 sshd-session[99653]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:09:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 02 15:09:50 compute-0 ceph-mon[75334]: osdmap e89: 3 total, 3 up, 3 in
Feb 02 15:09:50 compute-0 python3.9[99806]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 02 15:09:50 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 02 15:09:50 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 02 15:09:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb 02 15:09:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 02 15:09:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb 02 15:09:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 02 15:09:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb 02 15:09:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb 02 15:09:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 89 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=13.614975929s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=57'484 lcod 57'484 active pruub 159.017883301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:51 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 90 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=89 pruub=13.614562035s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=57'484 lcod 57'484 unknown NOTIFY pruub 159.017883301s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:51 compute-0 ceph-mon[75334]: 10.1b scrub starts
Feb 02 15:09:51 compute-0 ceph-mon[75334]: 10.1b scrub ok
Feb 02 15:09:51 compute-0 ceph-mon[75334]: 3.10 scrub starts
Feb 02 15:09:51 compute-0 ceph-mon[75334]: 3.10 scrub ok
Feb 02 15:09:51 compute-0 ceph-mon[75334]: pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 02 15:09:51 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=89) [2] r=0 lpr=90 pi=[56,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:51 compute-0 python3.9[99980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:09:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb 02 15:09:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb 02 15:09:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb 02 15:09:52 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 91 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:52 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 91 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=56/57 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:52 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:52 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 02 15:09:52 compute-0 ceph-mon[75334]: osdmap e90: 3 total, 3 up, 3 in
Feb 02 15:09:52 compute-0 sudo[100134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-earlgrwtuahrthxyhaqpgthgdqmwsmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044991.923933-40-147867049897536/AnsiballZ_command.py'
Feb 02 15:09:52 compute-0 sudo[100134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:52 compute-0 python3.9[100136]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:09:52 compute-0 sudo[100134]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb 02 15:09:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 02 15:09:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb 02 15:09:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 02 15:09:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb 02 15:09:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb 02 15:09:53 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 92 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=92 pruub=12.572928429s) [1] r=-1 lpr=92 pi=[57,92)/1 crt=39'483 active pruub 160.025268555s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:53 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 92 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=92 pruub=12.572786331s) [1] r=-1 lpr=92 pi=[57,92)/1 crt=39'483 unknown NOTIFY pruub 160.025268555s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:53 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [1] r=0 lpr=92 pi=[57,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:53 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 92 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=91/92 n=6 ec=48/34 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] async=[2] r=0 lpr=91 pi=[56,91)/1 crt=58'485 lcod 57'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:53 compute-0 ceph-mon[75334]: osdmap e91: 3 total, 3 up, 3 in
Feb 02 15:09:53 compute-0 ceph-mon[75334]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 02 15:09:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 02 15:09:53 compute-0 ceph-mon[75334]: osdmap e92: 3 total, 3 up, 3 in
Feb 02 15:09:53 compute-0 sudo[100287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhrhnvgxksxcbegahtwlcojffywrzfbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044992.82875-52-219077669518807/AnsiballZ_stat.py'
Feb 02 15:09:53 compute-0 sudo[100287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb 02 15:09:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb 02 15:09:53 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb 02 15:09:53 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb 02 15:09:53 compute-0 python3.9[100289]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:09:53 compute-0 sudo[100287]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.363241323977932e-06 of space, bias 4.0, pg target 0.0016358895887735184 quantized to 16 (current 16)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:09:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:09:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb 02 15:09:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb 02 15:09:54 compute-0 sudo[100441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxizawtblxpkcivszzqwmfnpwzayxlrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044993.7063186-63-140354678367019/AnsiballZ_file.py'
Feb 02 15:09:54 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 93 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 pct=0'0 crt=58'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:54 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 93 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=58'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb 02 15:09:54 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 93 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=91/92 n=6 ec=48/34 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.008460999s) [2] async=[2] r=-1 lpr=93 pi=[56,93)/1 crt=58'485 lcod 57'484 active pruub 163.464218140s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:54 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 93 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=91/92 n=6 ec=48/34 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.008380890s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=58'485 lcod 57'484 unknown NOTIFY pruub 163.464218140s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:54 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=93) [1]/[0] r=0 lpr=93 pi=[57,93)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:54 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 93 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=93) [1]/[0] r=0 lpr=93 pi=[57,93)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:54 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[57,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:54 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=93) [1]/[0] r=-1 lpr=93 pi=[57,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:54 compute-0 sudo[100441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:54 compute-0 ceph-mon[75334]: 10.1f scrub starts
Feb 02 15:09:54 compute-0 ceph-mon[75334]: 10.1f scrub ok
Feb 02 15:09:54 compute-0 ceph-mon[75334]: 7.14 scrub starts
Feb 02 15:09:54 compute-0 ceph-mon[75334]: 7.14 scrub ok
Feb 02 15:09:54 compute-0 ceph-mon[75334]: osdmap e93: 3 total, 3 up, 3 in
Feb 02 15:09:54 compute-0 python3.9[100443]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:09:54 compute-0 sudo[100441]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb 02 15:09:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 02 15:09:54 compute-0 sudo[100593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htplfwdafjzkdodxrnxsgqsrzonddxna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770044994.6734967-72-271667582221567/AnsiballZ_file.py'
Feb 02 15:09:54 compute-0 sudo[100593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:09:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb 02 15:09:55 compute-0 python3.9[100595]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:09:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 02 15:09:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb 02 15:09:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb 02 15:09:55 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 94 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=94 pruub=8.724595070s) [0] r=-1 lpr=94 pi=[68,94)/1 crt=39'483 active pruub 150.803359985s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:55 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 94 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=94 pruub=8.724267960s) [0] r=-1 lpr=94 pi=[68,94)/1 crt=39'483 unknown NOTIFY pruub 150.803359985s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=94) [0] r=0 lpr=94 pi=[68,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:55 compute-0 sudo[100593]: pam_unix(sudo:session): session closed for user root
Feb 02 15:09:55 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 94 pg[9.13( v 58'485 (0'0,58'485] local-lis/les=93/94 n=6 ec=48/34 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=58'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:55 compute-0 ceph-mon[75334]: pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:09:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 02 15:09:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 02 15:09:55 compute-0 ceph-mon[75334]: osdmap e94: 3 total, 3 up, 3 in
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 94 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[57,93)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:09:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb 02 15:09:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb 02 15:09:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=48/34 lis/c=93/57 les/c/f=94/58/0 sis=95 pruub=15.804010391s) [1] async=[1] r=-1 lpr=95 pi=[57,95)/1 crt=39'483 active pruub 165.818893433s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=93/94 n=6 ec=48/34 lis/c=93/57 les/c/f=94/58/0 sis=95 pruub=15.803962708s) [1] r=-1 lpr=95 pi=[57,95)/1 crt=39'483 unknown NOTIFY pruub 165.818893433s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[68,95)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:55 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[68,95)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:55 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=93/57 les/c/f=94/58/0 sis=95) [1] r=0 lpr=95 pi=[57,95)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:55 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=93/57 les/c/f=94/58/0 sis=95) [1] r=0 lpr=95 pi=[57,95)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:55 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=95) [0]/[2] r=0 lpr=95 pi=[68,95)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:55 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 95 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=95) [0]/[2] r=0 lpr=95 pi=[68,95)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:55 compute-0 python3.9[100745]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:09:55 compute-0 network[100762]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:09:55 compute-0 network[100763]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:09:55 compute-0 network[100764]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:09:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb 02 15:09:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb 02 15:09:56 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb 02 15:09:56 compute-0 ceph-mon[75334]: osdmap e95: 3 total, 3 up, 3 in
Feb 02 15:09:56 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 96 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=95/96 n=6 ec=48/34 lis/c=93/57 les/c/f=94/58/0 sis=95) [1] r=0 lpr=95 pi=[57,95)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 unknown, 1 activating+remapped, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 4/248 objects misplaced (1.613%)
Feb 02 15:09:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 02 15:09:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 96 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=95/96 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=95) [0]/[2] async=[0] r=0 lpr=95 pi=[68,95)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 02 15:09:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb 02 15:09:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb 02 15:09:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb 02 15:09:57 compute-0 ceph-mon[75334]: osdmap e96: 3 total, 3 up, 3 in
Feb 02 15:09:57 compute-0 ceph-mon[75334]: pgmap v184: 305 pgs: 1 unknown, 1 activating+remapped, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 4/248 objects misplaced (1.613%)
Feb 02 15:09:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb 02 15:09:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb 02 15:09:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=95/96 n=6 ec=48/34 lis/c=95/68 les/c/f=96/69/0 sis=97 pruub=15.288984299s) [0] async=[0] r=-1 lpr=97 pi=[68,97)/1 crt=39'483 active pruub 159.958038330s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:57 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=95/96 n=6 ec=48/34 lis/c=95/68 les/c/f=96/69/0 sis=97 pruub=15.288652420s) [0] r=-1 lpr=97 pi=[68,97)/1 crt=39'483 unknown NOTIFY pruub 159.958038330s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:09:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=95/68 les/c/f=96/69/0 sis=97) [0] r=0 lpr=97 pi=[68,97)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:09:57 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=95/68 les/c/f=96/69/0 sis=97) [0] r=0 lpr=97 pi=[68,97)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:09:58 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb 02 15:09:58 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb 02 15:09:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 02 15:09:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 11.14 scrub starts
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 11.14 scrub ok
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 7.b scrub starts
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 7.b scrub ok
Feb 02 15:09:58 compute-0 ceph-mon[75334]: osdmap e97: 3 total, 3 up, 3 in
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 11.17 scrub starts
Feb 02 15:09:58 compute-0 ceph-mon[75334]: 11.17 scrub ok
Feb 02 15:09:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb 02 15:09:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb 02 15:09:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb 02 15:09:58 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 98 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=97/98 n=6 ec=48/34 lis/c=95/68 les/c/f=96/69/0 sis=97) [0] r=0 lpr=97 pi=[68,97)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:09:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 1 unknown, 1 activating+remapped, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 4/249 objects misplaced (1.606%); 42 B/s, 0 objects/s recovering
Feb 02 15:09:58 compute-0 python3.9[101024]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:09:59 compute-0 python3.9[101174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:09:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 02 15:09:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 02 15:09:59 compute-0 ceph-mon[75334]: 5.1c scrub starts
Feb 02 15:09:59 compute-0 ceph-mon[75334]: 5.1c scrub ok
Feb 02 15:09:59 compute-0 ceph-mon[75334]: osdmap e98: 3 total, 3 up, 3 in
Feb 02 15:09:59 compute-0 ceph-mon[75334]: pgmap v187: 305 pgs: 1 unknown, 1 activating+remapped, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 4/249 objects misplaced (1.606%); 42 B/s, 0 objects/s recovering
Feb 02 15:10:00 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 02 15:10:00 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 02 15:10:00 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb 02 15:10:00 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb 02 15:10:00 compute-0 python3.9[101328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:10:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 403 B/s wr, 8 op/s; 116 B/s, 3 objects/s recovering
Feb 02 15:10:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb 02 15:10:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 02 15:10:00 compute-0 ceph-mon[75334]: 11.7 scrub starts
Feb 02 15:10:00 compute-0 ceph-mon[75334]: 11.7 scrub ok
Feb 02 15:10:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb 02 15:10:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 02 15:10:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb 02 15:10:00 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb 02 15:10:01 compute-0 sudo[101484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xytivscyvkeivuqpnadlkxtrjeyjtpkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045000.987377-120-142559575232852/AnsiballZ_setup.py'
Feb 02 15:10:01 compute-0 sudo[101484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:01 compute-0 python3.9[101486]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:10:01 compute-0 sudo[101484]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:01 compute-0 ceph-mon[75334]: 7.1a scrub starts
Feb 02 15:10:01 compute-0 ceph-mon[75334]: 7.1a scrub ok
Feb 02 15:10:01 compute-0 ceph-mon[75334]: 8.5 scrub starts
Feb 02 15:10:01 compute-0 ceph-mon[75334]: 8.5 scrub ok
Feb 02 15:10:01 compute-0 ceph-mon[75334]: pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 403 B/s wr, 8 op/s; 116 B/s, 3 objects/s recovering
Feb 02 15:10:01 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 02 15:10:01 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 02 15:10:01 compute-0 ceph-mon[75334]: osdmap e99: 3 total, 3 up, 3 in
Feb 02 15:10:02 compute-0 sudo[101568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqamfepmhkohhrdinjxoltpmsmflhkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045000.987377-120-142559575232852/AnsiballZ_dnf.py'
Feb 02 15:10:02 compute-0 sudo[101568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:02 compute-0 python3.9[101570]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:10:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 98 B/s, 2 objects/s recovering
Feb 02 15:10:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb 02 15:10:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 02 15:10:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb 02 15:10:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 02 15:10:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb 02 15:10:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb 02 15:10:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 02 15:10:03 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb 02 15:10:03 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb 02 15:10:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 02 15:10:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 02 15:10:03 compute-0 ceph-mon[75334]: pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 98 B/s, 2 objects/s recovering
Feb 02 15:10:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 02 15:10:03 compute-0 ceph-mon[75334]: osdmap e100: 3 total, 3 up, 3 in
Feb 02 15:10:03 compute-0 ceph-mon[75334]: 3.1f scrub starts
Feb 02 15:10:03 compute-0 ceph-mon[75334]: 3.1f scrub ok
Feb 02 15:10:03 compute-0 ceph-mon[75334]: 11.5 scrub starts
Feb 02 15:10:03 compute-0 ceph-mon[75334]: 11.5 scrub ok
Feb 02 15:10:04 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb 02 15:10:04 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb 02 15:10:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 54 B/s, 1 objects/s recovering
Feb 02 15:10:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb 02 15:10:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 02 15:10:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb 02 15:10:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 02 15:10:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb 02 15:10:04 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb 02 15:10:05 compute-0 ceph-mon[75334]: 3.d scrub starts
Feb 02 15:10:05 compute-0 ceph-mon[75334]: 3.d scrub ok
Feb 02 15:10:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 02 15:10:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb 02 15:10:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb 02 15:10:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 101 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=101 pruub=8.088098526s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=58'486 lcod 58'486 active pruub 168.027389526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 101 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=101 pruub=8.087994576s) [2] r=-1 lpr=101 pi=[57,101)/1 crt=58'486 lcod 58'486 unknown NOTIFY pruub 168.027389526s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=101) [2] r=0 lpr=101 pi=[57,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb 02 15:10:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb 02 15:10:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb 02 15:10:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 102 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=102) [2]/[0] r=0 lpr=102 pi=[57,102)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:05 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 102 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=57/58 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=102) [2]/[0] r=0 lpr=102 pi=[57,102)/1 crt=58'486 lcod 58'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[57,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:05 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[57,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:06 compute-0 ceph-mon[75334]: pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 54 B/s, 1 objects/s recovering
Feb 02 15:10:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 02 15:10:06 compute-0 ceph-mon[75334]: osdmap e101: 3 total, 3 up, 3 in
Feb 02 15:10:06 compute-0 ceph-mon[75334]: osdmap e102: 3 total, 3 up, 3 in
Feb 02 15:10:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb 02 15:10:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb 02 15:10:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb 02 15:10:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb 02 15:10:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 02 15:10:06 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb 02 15:10:06 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb 02 15:10:07 compute-0 ceph-mon[75334]: 11.15 scrub starts
Feb 02 15:10:07 compute-0 ceph-mon[75334]: 11.15 scrub ok
Feb 02 15:10:07 compute-0 ceph-mon[75334]: osdmap e103: 3 total, 3 up, 3 in
Feb 02 15:10:07 compute-0 ceph-mon[75334]: pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 02 15:10:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 103 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=102/103 n=6 ec=48/34 lis/c=57/57 les/c/f=58/58/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[57,102)/1 crt=58'487 lcod 58'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:07 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb 02 15:10:07 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb 02 15:10:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb 02 15:10:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 02 15:10:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb 02 15:10:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb 02 15:10:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 104 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=102/103 n=6 ec=48/34 lis/c=102/57 les/c/f=103/58/0 sis=104 pruub=15.592935562s) [2] async=[2] r=-1 lpr=104 pi=[57,104)/1 crt=58'487 lcod 58'486 active pruub 177.615997314s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:07 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 104 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=102/103 n=6 ec=48/34 lis/c=102/57 les/c/f=103/58/0 sis=104 pruub=15.592700005s) [2] r=-1 lpr=104 pi=[57,104)/1 crt=58'487 lcod 58'486 unknown NOTIFY pruub 177.615997314s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:07 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 104 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=102/57 les/c/f=103/58/0 sis=104) [2] r=0 lpr=104 pi=[57,104)/1 pct=0'0 crt=58'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:07 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 104 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=102/57 les/c/f=103/58/0 sis=104) [2] r=0 lpr=104 pi=[57,104)/1 crt=58'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:08 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb 02 15:10:08 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb 02 15:10:08 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 02 15:10:08 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 02 15:10:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb 02 15:10:08 compute-0 ceph-mon[75334]: 7.1b scrub starts
Feb 02 15:10:08 compute-0 ceph-mon[75334]: 7.1b scrub ok
Feb 02 15:10:08 compute-0 ceph-mon[75334]: 7.10 scrub starts
Feb 02 15:10:08 compute-0 ceph-mon[75334]: 7.10 scrub ok
Feb 02 15:10:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 02 15:10:08 compute-0 ceph-mon[75334]: osdmap e104: 3 total, 3 up, 3 in
Feb 02 15:10:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb 02 15:10:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb 02 15:10:08 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 105 pg[9.19( v 58'487 (0'0,58'487] local-lis/les=104/105 n=6 ec=48/34 lis/c=102/57 les/c/f=103/58/0 sis=104) [2] r=0 lpr=104 pi=[57,104)/1 crt=58'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Feb 02 15:10:09 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb 02 15:10:09 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb 02 15:10:09 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb 02 15:10:09 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb 02 15:10:09 compute-0 ceph-mon[75334]: 8.15 scrub starts
Feb 02 15:10:09 compute-0 ceph-mon[75334]: 8.15 scrub ok
Feb 02 15:10:09 compute-0 ceph-mon[75334]: 3.14 scrub starts
Feb 02 15:10:09 compute-0 ceph-mon[75334]: 3.14 scrub ok
Feb 02 15:10:09 compute-0 ceph-mon[75334]: osdmap e105: 3 total, 3 up, 3 in
Feb 02 15:10:09 compute-0 ceph-mon[75334]: pgmap v199: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Feb 02 15:10:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb 02 15:10:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb 02 15:10:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 1 objects/s recovering
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 3.1d scrub starts
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 3.1d scrub ok
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 8.7 scrub starts
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 8.7 scrub ok
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 8.10 scrub starts
Feb 02 15:10:10 compute-0 ceph-mon[75334]: 8.10 scrub ok
Feb 02 15:10:11 compute-0 ceph-mon[75334]: pgmap v200: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 1 objects/s recovering
Feb 02 15:10:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb 02 15:10:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb 02 15:10:12 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb 02 15:10:12 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb 02 15:10:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb 02 15:10:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb 02 15:10:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 1 objects/s recovering
Feb 02 15:10:12 compute-0 ceph-mon[75334]: 7.18 scrub starts
Feb 02 15:10:12 compute-0 ceph-mon[75334]: 7.18 scrub ok
Feb 02 15:10:13 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb 02 15:10:13 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb 02 15:10:13 compute-0 ceph-mon[75334]: 3.1e scrub starts
Feb 02 15:10:13 compute-0 ceph-mon[75334]: 3.1e scrub ok
Feb 02 15:10:13 compute-0 ceph-mon[75334]: 11.0 scrub starts
Feb 02 15:10:13 compute-0 ceph-mon[75334]: 11.0 scrub ok
Feb 02 15:10:13 compute-0 ceph-mon[75334]: pgmap v201: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 1 objects/s recovering
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Feb 02 15:10:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb 02 15:10:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 02 15:10:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb 02 15:10:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 02 15:10:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb 02 15:10:14 compute-0 ceph-mon[75334]: 11.19 scrub starts
Feb 02 15:10:14 compute-0 ceph-mon[75334]: 11.19 scrub ok
Feb 02 15:10:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 02 15:10:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb 02 15:10:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:15 compute-0 ceph-mon[75334]: pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 1 objects/s recovering
Feb 02 15:10:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 02 15:10:15 compute-0 ceph-mon[75334]: osdmap e106: 3 total, 3 up, 3 in
Feb 02 15:10:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb 02 15:10:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb 02 15:10:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb 02 15:10:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 02 15:10:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb 02 15:10:16 compute-0 ceph-mon[75334]: pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 02 15:10:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 02 15:10:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb 02 15:10:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb 02 15:10:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 02 15:10:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 02 15:10:17 compute-0 ceph-mon[75334]: 8.3 scrub starts
Feb 02 15:10:17 compute-0 ceph-mon[75334]: 8.3 scrub ok
Feb 02 15:10:17 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 02 15:10:17 compute-0 ceph-mon[75334]: osdmap e107: 3 total, 3 up, 3 in
Feb 02 15:10:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb 02 15:10:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 02 15:10:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb 02 15:10:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 02 15:10:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb 02 15:10:18 compute-0 ceph-mon[75334]: 7.12 scrub starts
Feb 02 15:10:18 compute-0 ceph-mon[75334]: 7.12 scrub ok
Feb 02 15:10:18 compute-0 ceph-mon[75334]: pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:18 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 02 15:10:18 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 107 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=82/83 n=6 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=107 pruub=8.934097290s) [0] r=-1 lpr=107 pi=[82,107)/1 crt=58'487 active pruub 174.788101196s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:18 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 108 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=82/83 n=6 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=107 pruub=8.934039116s) [0] r=-1 lpr=107 pi=[82,107)/1 crt=58'487 unknown NOTIFY pruub 174.788101196s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb 02 15:10:18 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=107) [0] r=0 lpr=108 pi=[82,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb 02 15:10:19 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb 02 15:10:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb 02 15:10:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb 02 15:10:19 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb 02 15:10:19 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb 02 15:10:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb 02 15:10:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 02 15:10:19 compute-0 ceph-mon[75334]: osdmap e108: 3 total, 3 up, 3 in
Feb 02 15:10:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb 02 15:10:19 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb 02 15:10:19 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[82,109)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:19 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[82,109)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:19 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 109 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=82/83 n=6 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=109) [0]/[2] r=0 lpr=109 pi=[82,109)/1 crt=58'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:19 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 109 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=82/83 n=6 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=109) [0]/[2] r=0 lpr=109 pi=[82,109)/1 crt=58'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb 02 15:10:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 02 15:10:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb 02 15:10:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 02 15:10:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 7.11 scrub starts
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 7.11 scrub ok
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 8.0 scrub starts
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 8.0 scrub ok
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 8.14 scrub starts
Feb 02 15:10:20 compute-0 ceph-mon[75334]: 8.14 scrub ok
Feb 02 15:10:20 compute-0 ceph-mon[75334]: osdmap e109: 3 total, 3 up, 3 in
Feb 02 15:10:20 compute-0 ceph-mon[75334]: pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 02 15:10:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb 02 15:10:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 110 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=14.915988922s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=58'485 active pruub 182.804367065s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 110 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=110 pruub=14.915935516s) [0] r=-1 lpr=110 pi=[68,110)/1 crt=58'485 unknown NOTIFY pruub 182.804367065s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:20 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=110) [0] r=0 lpr=110 pi=[68,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:20 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 110 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=109/110 n=6 ec=48/34 lis/c=82/82 les/c/f=83/83/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[82,109)/1 crt=58'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb 02 15:10:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 02 15:10:21 compute-0 ceph-mon[75334]: osdmap e110: 3 total, 3 up, 3 in
Feb 02 15:10:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb 02 15:10:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb 02 15:10:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 111 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=109/110 n=6 ec=48/34 lis/c=109/82 les/c/f=110/83/0 sis=111 pruub=14.938586235s) [0] async=[0] r=-1 lpr=111 pi=[82,111)/1 crt=58'487 active pruub 183.896759033s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 111 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=58'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 111 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=68/69 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=0 lpr=111 pi=[68,111)/1 crt=58'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:22 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 111 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=109/110 n=6 ec=48/34 lis/c=109/82 les/c/f=110/83/0 sis=111 pruub=14.938439369s) [0] r=-1 lpr=111 pi=[82,111)/1 crt=58'487 unknown NOTIFY pruub 183.896759033s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:22 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 111 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=109/82 les/c/f=110/83/0 sis=111) [0] r=0 lpr=111 pi=[82,111)/1 pct=0'0 crt=58'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:22 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 111 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=0/0 n=6 ec=48/34 lis/c=109/82 les/c/f=110/83/0 sis=111) [0] r=0 lpr=111 pi=[82,111)/1 crt=58'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:22 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:22 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[68,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:22 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb 02 15:10:22 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb 02 15:10:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 02 15:10:22 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:10:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb 02 15:10:23 compute-0 ceph-mon[75334]: osdmap e111: 3 total, 3 up, 3 in
Feb 02 15:10:23 compute-0 ceph-mon[75334]: 3.b scrub starts
Feb 02 15:10:23 compute-0 ceph-mon[75334]: 3.b scrub ok
Feb 02 15:10:23 compute-0 ceph-mon[75334]: pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:23 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 02 15:10:23 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:10:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb 02 15:10:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb 02 15:10:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 112 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=112 pruub=8.638838768s) [1] r=-1 lpr=112 pi=[72,112)/1 crt=39'483 active pruub 178.611557007s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 112 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=112 pruub=8.638772964s) [1] r=-1 lpr=112 pi=[72,112)/1 crt=39'483 unknown NOTIFY pruub 178.611557007s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:23 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 112 pg[9.1c( v 58'487 (0'0,58'487] local-lis/les=111/112 n=6 ec=48/34 lis/c=109/82 les/c/f=110/83/0 sis=111) [0] r=0 lpr=111 pi=[82,111)/1 crt=58'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:23 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=112) [1] r=0 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:23 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 112 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=111/112 n=6 ec=48/34 lis/c=68/68 les/c/f=69/69/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[68,111)/1 crt=58'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:23 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 02 15:10:23 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 02 15:10:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb 02 15:10:24 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 02 15:10:24 compute-0 ceph-mon[75334]: osdmap e112: 3 total, 3 up, 3 in
Feb 02 15:10:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb 02 15:10:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb 02 15:10:24 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 113 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=113) [1]/[2] r=0 lpr=113 pi=[72,113)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:24 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 113 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=72/73 n=6 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=113) [1]/[2] r=0 lpr=113 pi=[72,113)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:24 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 113 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=111/112 n=6 ec=48/34 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=14.995746613s) [0] async=[0] r=-1 lpr=113 pi=[68,113)/1 crt=58'485 active pruub 185.982162476s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:24 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 113 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=111/112 n=6 ec=48/34 lis/c=111/68 les/c/f=112/69/0 sis=113 pruub=14.995642662s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=58'485 unknown NOTIFY pruub 185.982162476s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:24 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[72,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:24 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=113) [1]/[2] r=-1 lpr=113 pi=[72,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:24 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 113 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 pct=0'0 crt=58'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:24 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 113 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=0/0 n=6 ec=48/34 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=58'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:24 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb 02 15:10:24 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb 02 15:10:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/253 objects misplaced (1.581%); 37 B/s, 0 objects/s recovering
Feb 02 15:10:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb 02 15:10:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb 02 15:10:25 compute-0 ceph-mon[75334]: 3.11 scrub starts
Feb 02 15:10:25 compute-0 ceph-mon[75334]: 3.11 scrub ok
Feb 02 15:10:25 compute-0 ceph-mon[75334]: osdmap e113: 3 total, 3 up, 3 in
Feb 02 15:10:25 compute-0 ceph-mon[75334]: 10.16 scrub starts
Feb 02 15:10:25 compute-0 ceph-mon[75334]: pgmap v215: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/253 objects misplaced (1.581%); 37 B/s, 0 objects/s recovering
Feb 02 15:10:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb 02 15:10:25 compute-0 ceph-osd[86115]: osd.0 pg_epoch: 114 pg[9.1e( v 58'485 (0'0,58'485] local-lis/les=113/114 n=6 ec=48/34 lis/c=111/68 les/c/f=112/69/0 sis=113) [0] r=0 lpr=113 pi=[68,113)/1 crt=58'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 114 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=48/34 lis/c=72/72 les/c/f=73/73/0 sis=113) [1]/[2] async=[1] r=0 lpr=113 pi=[72,113)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb 02 15:10:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb 02 15:10:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=48/34 lis/c=113/72 les/c/f=114/73/0 sis=115 pruub=15.716917992s) [1] async=[1] r=-1 lpr=115 pi=[72,115)/1 crt=39'483 active pruub 188.320007324s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:25 compute-0 ceph-osd[88227]: osd.2 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=113/114 n=6 ec=48/34 lis/c=113/72 les/c/f=114/73/0 sis=115 pruub=15.716822624s) [1] r=-1 lpr=115 pi=[72,115)/1 crt=39'483 unknown NOTIFY pruub 188.320007324s@ mbc={}] state<Start>: transitioning to Stray
Feb 02 15:10:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb 02 15:10:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=113/72 les/c/f=114/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 02 15:10:25 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 115 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/34 lis/c=113/72 les/c/f=114/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 02 15:10:25 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb 02 15:10:25 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb 02 15:10:26 compute-0 ceph-mon[75334]: 10.16 scrub ok
Feb 02 15:10:26 compute-0 ceph-mon[75334]: osdmap e114: 3 total, 3 up, 3 in
Feb 02 15:10:26 compute-0 ceph-mon[75334]: osdmap e115: 3 total, 3 up, 3 in
Feb 02 15:10:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb 02 15:10:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb 02 15:10:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb 02 15:10:26 compute-0 ceph-osd[87170]: osd.1 pg_epoch: 116 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=115/116 n=6 ec=48/34 lis/c=113/72 les/c/f=114/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 02 15:10:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb 02 15:10:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 4 objects/s recovering
Feb 02 15:10:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb 02 15:10:27 compute-0 ceph-mon[75334]: 7.1f scrub starts
Feb 02 15:10:27 compute-0 ceph-mon[75334]: 7.1f scrub ok
Feb 02 15:10:27 compute-0 ceph-mon[75334]: osdmap e116: 3 total, 3 up, 3 in
Feb 02 15:10:27 compute-0 ceph-mon[75334]: pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 4 objects/s recovering
Feb 02 15:10:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb 02 15:10:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb 02 15:10:28 compute-0 ceph-mon[75334]: 5.15 scrub starts
Feb 02 15:10:28 compute-0 ceph-mon[75334]: 5.15 scrub ok
Feb 02 15:10:28 compute-0 ceph-mon[75334]: 11.c scrub starts
Feb 02 15:10:28 compute-0 ceph-mon[75334]: 11.c scrub ok
Feb 02 15:10:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 02 15:10:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 02 15:10:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Feb 02 15:10:29 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 02 15:10:29 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 02 15:10:29 compute-0 sudo[101698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:10:29 compute-0 sudo[101698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:29 compute-0 sudo[101698]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:29 compute-0 sudo[101723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:10:29 compute-0 sudo[101723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Feb 02 15:10:29 compute-0 sudo[101723]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:10:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:10:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:10:29 compute-0 sudo[101789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:10:29 compute-0 sudo[101789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:29 compute-0 sudo[101789]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:29 compute-0 sudo[101814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:10:29 compute-0 sudo[101814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.30126856 +0000 UTC m=+0.090967283 container create de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.24442241 +0000 UTC m=+0.034121143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:30 compute-0 systemd[1]: Started libpod-conmon-de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82.scope.
Feb 02 15:10:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.402565574 +0000 UTC m=+0.192264347 container init de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.414424962 +0000 UTC m=+0.204123695 container start de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:10:30 compute-0 beautiful_keller[101875]: 167 167
Feb 02 15:10:30 compute-0 systemd[1]: libpod-de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82.scope: Deactivated successfully.
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.430862695 +0000 UTC m=+0.220561428 container attach de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.431529025 +0000 UTC m=+0.221227748 container died de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b9b0185658ef998f7bfe626ff6b9680d635da8a7ad575f25bf9e909610a22c-merged.mount: Deactivated successfully.
Feb 02 15:10:30 compute-0 podman[101859]: 2026-02-02 15:10:30.498357667 +0000 UTC m=+0.288056400 container remove de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keller, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:10:30 compute-0 systemd[1]: libpod-conmon-de902b00d0ce0fd98393ba32ae12154a7d65b5b3e2983d47ea11c94525792d82.scope: Deactivated successfully.
Feb 02 15:10:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:30 compute-0 podman[101897]: 2026-02-02 15:10:30.699492015 +0000 UTC m=+0.082530695 container create 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:10:30 compute-0 podman[101897]: 2026-02-02 15:10:30.641494182 +0000 UTC m=+0.024532962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:30 compute-0 systemd[1]: Started libpod-conmon-4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33.scope.
Feb 02 15:10:30 compute-0 ceph-mon[75334]: 8.18 scrub starts
Feb 02 15:10:30 compute-0 ceph-mon[75334]: 8.18 scrub ok
Feb 02 15:10:30 compute-0 ceph-mon[75334]: 8.4 scrub starts
Feb 02 15:10:30 compute-0 ceph-mon[75334]: 8.4 scrub ok
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:10:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:10:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb 02 15:10:30 compute-0 podman[101897]: 2026-02-02 15:10:30.831261575 +0000 UTC m=+0.214300325 container init 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:10:30 compute-0 podman[101897]: 2026-02-02 15:10:30.838221049 +0000 UTC m=+0.221259749 container start 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:10:30 compute-0 podman[101897]: 2026-02-02 15:10:30.882126598 +0000 UTC m=+0.265165368 container attach 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:10:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 02 15:10:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 02 15:10:31 compute-0 peaceful_cori[101914]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:10:31 compute-0 peaceful_cori[101914]: --> All data devices are unavailable
Feb 02 15:10:31 compute-0 systemd[1]: libpod-4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33.scope: Deactivated successfully.
Feb 02 15:10:31 compute-0 podman[101897]: 2026-02-02 15:10:31.285201306 +0000 UTC m=+0.668240026 container died 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-773601acb30eca20be1cedd4335b7e298f63f90042cbe5a5f9b6141330cb7a0e-merged.mount: Deactivated successfully.
Feb 02 15:10:31 compute-0 podman[101897]: 2026-02-02 15:10:31.36292166 +0000 UTC m=+0.745960350 container remove 4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:10:31 compute-0 systemd[1]: libpod-conmon-4d7b04b3f847d4d6d8db7970a961d349aaf076c6933714dd627477bf94d6fb33.scope: Deactivated successfully.
Feb 02 15:10:31 compute-0 sudo[101814]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:31 compute-0 sudo[101944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:10:31 compute-0 sudo[101944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:31 compute-0 sudo[101944]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 02 15:10:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 02 15:10:31 compute-0 sudo[101969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:10:31 compute-0 sudo[101969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:31 compute-0 ceph-mon[75334]: pgmap v221: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb 02 15:10:31 compute-0 ceph-mon[75334]: 7.0 scrub starts
Feb 02 15:10:31 compute-0 ceph-mon[75334]: 7.0 scrub ok
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.780480023 +0000 UTC m=+0.052285907 container create 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:10:31 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb 02 15:10:31 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb 02 15:10:31 compute-0 systemd[1]: Started libpod-conmon-3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee.scope.
Feb 02 15:10:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.754458359 +0000 UTC m=+0.026264333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.856155205 +0000 UTC m=+0.127961139 container init 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.862514842 +0000 UTC m=+0.134320726 container start 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.866356575 +0000 UTC m=+0.138162499 container attach 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:10:31 compute-0 kind_boyd[102023]: 167 167
Feb 02 15:10:31 compute-0 systemd[1]: libpod-3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee.scope: Deactivated successfully.
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.868829747 +0000 UTC m=+0.140635661 container died 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20226a55dae5f1b7a7dee142d4c5fcb87d1f21c1b1b22b7882e63df8369d07b-merged.mount: Deactivated successfully.
Feb 02 15:10:31 compute-0 podman[102006]: 2026-02-02 15:10:31.913374546 +0000 UTC m=+0.185180470 container remove 3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:10:31 compute-0 systemd[1]: libpod-conmon-3f6c97d03264613f8b79b83e66d12f41ccd0aab2cee48c9521b67a56c8c8e8ee.scope: Deactivated successfully.
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.082896515 +0000 UTC m=+0.049750392 container create a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:10:32 compute-0 systemd[1]: Started libpod-conmon-a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa.scope.
Feb 02 15:10:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc0b61d0f3caaefc28fc88d9404848bd5cee95025f8d1e22c40c70b58b2f7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc0b61d0f3caaefc28fc88d9404848bd5cee95025f8d1e22c40c70b58b2f7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc0b61d0f3caaefc28fc88d9404848bd5cee95025f8d1e22c40c70b58b2f7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc0b61d0f3caaefc28fc88d9404848bd5cee95025f8d1e22c40c70b58b2f7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.05788167 +0000 UTC m=+0.024735607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.168416356 +0000 UTC m=+0.135270233 container init a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.178335407 +0000 UTC m=+0.145189254 container start a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.186629761 +0000 UTC m=+0.153483638 container attach a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]: {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     "0": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "devices": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "/dev/loop3"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             ],
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_name": "ceph_lv0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_size": "21470642176",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "name": "ceph_lv0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "tags": {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_name": "ceph",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.crush_device_class": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.encrypted": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.objectstore": "bluestore",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_id": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.vdo": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.with_tpm": "0"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             },
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "vg_name": "ceph_vg0"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         }
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     ],
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     "1": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "devices": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "/dev/loop4"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             ],
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_name": "ceph_lv1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_size": "21470642176",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "name": "ceph_lv1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "tags": {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_name": "ceph",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.crush_device_class": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.encrypted": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.objectstore": "bluestore",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_id": "1",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.vdo": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.with_tpm": "0"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             },
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "vg_name": "ceph_vg1"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         }
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     ],
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     "2": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "devices": [
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "/dev/loop5"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             ],
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_name": "ceph_lv2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_size": "21470642176",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "name": "ceph_lv2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "tags": {
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.cluster_name": "ceph",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.crush_device_class": "",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.encrypted": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.objectstore": "bluestore",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osd_id": "2",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.vdo": "0",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:                 "ceph.with_tpm": "0"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             },
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "type": "block",
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:             "vg_name": "ceph_vg2"
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:         }
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]:     ]
Feb 02 15:10:32 compute-0 pedantic_meninsky[102065]: }
Feb 02 15:10:32 compute-0 systemd[1]: libpod-a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa.scope: Deactivated successfully.
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.44908023 +0000 UTC m=+0.415934087 container died a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5dc0b61d0f3caaefc28fc88d9404848bd5cee95025f8d1e22c40c70b58b2f7a-merged.mount: Deactivated successfully.
Feb 02 15:10:32 compute-0 podman[102047]: 2026-02-02 15:10:32.492926457 +0000 UTC m=+0.459780344 container remove a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:10:32 compute-0 systemd[1]: libpod-conmon-a1d46f0d02eae0050a2f85114e9443595d2fe2e612369126a80c83b6371b72aa.scope: Deactivated successfully.
Feb 02 15:10:32 compute-0 sudo[101969]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:32 compute-0 sudo[102090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:10:32 compute-0 sudo[102090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:32 compute-0 sudo[102090]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:32 compute-0 sudo[102115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:10:32 compute-0 sudo[102115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:32 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb 02 15:10:32 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb 02 15:10:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Feb 02 15:10:32 compute-0 ceph-mon[75334]: 11.1b scrub starts
Feb 02 15:10:32 compute-0 ceph-mon[75334]: 11.1b scrub ok
Feb 02 15:10:32 compute-0 ceph-mon[75334]: 8.1f scrub starts
Feb 02 15:10:32 compute-0 ceph-mon[75334]: 8.1f scrub ok
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.916613271 +0000 UTC m=+0.048484306 container create e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:10:32 compute-0 systemd[1]: Started libpod-conmon-e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65.scope.
Feb 02 15:10:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.979705264 +0000 UTC m=+0.111576309 container init e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.988610005 +0000 UTC m=+0.120481020 container start e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.893392458 +0000 UTC m=+0.025263533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:32 compute-0 inspiring_bell[102168]: 167 167
Feb 02 15:10:32 compute-0 systemd[1]: libpod-e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65.scope: Deactivated successfully.
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.997524856 +0000 UTC m=+0.129395891 container attach e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:10:32 compute-0 podman[102152]: 2026-02-02 15:10:32.998208657 +0000 UTC m=+0.130079692 container died e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-af062c06ecf6f5ba6b754bcbb4c3702f5e1138e6af8b35b50811c666ccb3f40b-merged.mount: Deactivated successfully.
Feb 02 15:10:33 compute-0 podman[102152]: 2026-02-02 15:10:33.047902356 +0000 UTC m=+0.179773381 container remove e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bell, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:10:33 compute-0 systemd[1]: libpod-conmon-e897549e1b4cbb539ca081042e596d90b41b30a10a5579b884a984a2a5a46b65.scope: Deactivated successfully.
Feb 02 15:10:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 02 15:10:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 02 15:10:33 compute-0 podman[102192]: 2026-02-02 15:10:33.191883085 +0000 UTC m=+0.044277371 container create 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:10:33 compute-0 systemd[1]: Started libpod-conmon-211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988.scope.
Feb 02 15:10:33 compute-0 podman[102192]: 2026-02-02 15:10:33.170238029 +0000 UTC m=+0.022632325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:10:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4775897888031d5fee60684d0602390c61e1516302b503de8a44a17d8a6fb30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4775897888031d5fee60684d0602390c61e1516302b503de8a44a17d8a6fb30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4775897888031d5fee60684d0602390c61e1516302b503de8a44a17d8a6fb30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4775897888031d5fee60684d0602390c61e1516302b503de8a44a17d8a6fb30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:10:33 compute-0 podman[102192]: 2026-02-02 15:10:33.326759156 +0000 UTC m=+0.179153452 container init 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:10:33 compute-0 podman[102192]: 2026-02-02 15:10:33.335527134 +0000 UTC m=+0.187921420 container start 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:10:33 compute-0 podman[102192]: 2026-02-02 15:10:33.339198291 +0000 UTC m=+0.191592577 container attach 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:10:33 compute-0 ceph-mon[75334]: 3.15 scrub starts
Feb 02 15:10:33 compute-0 ceph-mon[75334]: 3.15 scrub ok
Feb 02 15:10:33 compute-0 ceph-mon[75334]: pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Feb 02 15:10:33 compute-0 lvm[102285]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:10:33 compute-0 lvm[102285]: VG ceph_vg0 finished
Feb 02 15:10:33 compute-0 lvm[102288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:10:33 compute-0 lvm[102288]: VG ceph_vg1 finished
Feb 02 15:10:34 compute-0 lvm[102290]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:10:34 compute-0 lvm[102290]: VG ceph_vg2 finished
Feb 02 15:10:34 compute-0 interesting_mahavira[102209]: {}
Feb 02 15:10:34 compute-0 systemd[1]: libpod-211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988.scope: Deactivated successfully.
Feb 02 15:10:34 compute-0 systemd[1]: libpod-211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988.scope: Consumed 1.223s CPU time.
Feb 02 15:10:34 compute-0 podman[102192]: 2026-02-02 15:10:34.169597799 +0000 UTC m=+1.021992095 container died 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4775897888031d5fee60684d0602390c61e1516302b503de8a44a17d8a6fb30-merged.mount: Deactivated successfully.
Feb 02 15:10:34 compute-0 podman[102192]: 2026-02-02 15:10:34.224967966 +0000 UTC m=+1.077362242 container remove 211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:10:34 compute-0 systemd[1]: libpod-conmon-211c24022b00f6f1b216c980d55a57f158ff482cd6978320ad0213400c47b988.scope: Deactivated successfully.
Feb 02 15:10:34 compute-0 sudo[102115]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:10:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:10:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:34 compute-0 sudo[102305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:10:34 compute-0 sudo[102305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:10:34 compute-0 sudo[102305]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Feb 02 15:10:34 compute-0 ceph-mon[75334]: 4.a scrub starts
Feb 02 15:10:34 compute-0 ceph-mon[75334]: 4.a scrub ok
Feb 02 15:10:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:34 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:10:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb 02 15:10:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb 02 15:10:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:35 compute-0 ceph-mon[75334]: pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Feb 02 15:10:35 compute-0 ceph-mon[75334]: 7.5 scrub starts
Feb 02 15:10:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb 02 15:10:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb 02 15:10:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Feb 02 15:10:36 compute-0 ceph-mon[75334]: 7.5 scrub ok
Feb 02 15:10:36 compute-0 ceph-mon[75334]: 8.1a scrub starts
Feb 02 15:10:36 compute-0 ceph-mon[75334]: 8.1a scrub ok
Feb 02 15:10:36 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb 02 15:10:36 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb 02 15:10:37 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb 02 15:10:37 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb 02 15:10:37 compute-0 ceph-mon[75334]: pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Feb 02 15:10:37 compute-0 ceph-mon[75334]: 8.1d scrub starts
Feb 02 15:10:37 compute-0 ceph-mon[75334]: 8.1d scrub ok
Feb 02 15:10:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb 02 15:10:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb 02 15:10:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb 02 15:10:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb 02 15:10:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 3.4 scrub starts
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 3.4 scrub ok
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 11.10 scrub starts
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 11.10 scrub ok
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 11.9 scrub starts
Feb 02 15:10:38 compute-0 ceph-mon[75334]: 11.9 scrub ok
Feb 02 15:10:38 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 02 15:10:38 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 02 15:10:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb 02 15:10:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb 02 15:10:39 compute-0 ceph-mon[75334]: pgmap v225: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Feb 02 15:10:39 compute-0 ceph-mon[75334]: 3.a scrub starts
Feb 02 15:10:39 compute-0 ceph-mon[75334]: 3.a scrub ok
Feb 02 15:10:39 compute-0 ceph-mon[75334]: 4.1 scrub starts
Feb 02 15:10:39 compute-0 ceph-mon[75334]: 4.1 scrub ok
Feb 02 15:10:40 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb 02 15:10:40 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb 02 15:10:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Feb 02 15:10:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb 02 15:10:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb 02 15:10:40 compute-0 ceph-mon[75334]: 7.2 scrub starts
Feb 02 15:10:40 compute-0 ceph-mon[75334]: 7.2 scrub ok
Feb 02 15:10:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb 02 15:10:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb 02 15:10:41 compute-0 ceph-mon[75334]: pgmap v226: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Feb 02 15:10:41 compute-0 ceph-mon[75334]: 3.17 scrub starts
Feb 02 15:10:41 compute-0 ceph-mon[75334]: 3.17 scrub ok
Feb 02 15:10:42 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb 02 15:10:42 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:10:42
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:10:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:42 compute-0 ceph-mon[75334]: 3.0 scrub starts
Feb 02 15:10:42 compute-0 ceph-mon[75334]: 3.0 scrub ok
Feb 02 15:10:43 compute-0 ceph-mon[75334]: 11.a scrub starts
Feb 02 15:10:43 compute-0 ceph-mon[75334]: 11.a scrub ok
Feb 02 15:10:43 compute-0 ceph-mon[75334]: pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb 02 15:10:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb 02 15:10:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb 02 15:10:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:10:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:44 compute-0 ceph-mon[75334]: 11.8 scrub starts
Feb 02 15:10:44 compute-0 ceph-mon[75334]: 11.8 scrub ok
Feb 02 15:10:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 02 15:10:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 02 15:10:45 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb 02 15:10:45 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb 02 15:10:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:45 compute-0 ceph-mon[75334]: 3.2 scrub starts
Feb 02 15:10:45 compute-0 ceph-mon[75334]: 3.2 scrub ok
Feb 02 15:10:45 compute-0 ceph-mon[75334]: pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:45 compute-0 ceph-mon[75334]: 3.5 scrub starts
Feb 02 15:10:45 compute-0 ceph-mon[75334]: 3.5 scrub ok
Feb 02 15:10:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb 02 15:10:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb 02 15:10:46 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb 02 15:10:46 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb 02 15:10:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:46 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb 02 15:10:46 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb 02 15:10:46 compute-0 ceph-mon[75334]: 3.1a scrub starts
Feb 02 15:10:46 compute-0 ceph-mon[75334]: 3.1a scrub ok
Feb 02 15:10:46 compute-0 ceph-mon[75334]: 8.2 scrub starts
Feb 02 15:10:46 compute-0 ceph-mon[75334]: 8.2 scrub ok
Feb 02 15:10:47 compute-0 sudo[101568]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:47 compute-0 sudo[102479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsgsqswpdwsfhjsitfxqfkurlmbnwsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045047.2701418-132-253881162615861/AnsiballZ_command.py'
Feb 02 15:10:47 compute-0 sudo[102479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:47 compute-0 python3.9[102481]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:10:47 compute-0 ceph-mon[75334]: 7.1e scrub starts
Feb 02 15:10:47 compute-0 ceph-mon[75334]: 7.1e scrub ok
Feb 02 15:10:47 compute-0 ceph-mon[75334]: pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:47 compute-0 ceph-mon[75334]: 7.13 scrub starts
Feb 02 15:10:47 compute-0 ceph-mon[75334]: 7.13 scrub ok
Feb 02 15:10:48 compute-0 sudo[102479]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:48 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb 02 15:10:48 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb 02 15:10:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 02 15:10:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 02 15:10:48 compute-0 ceph-mon[75334]: pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:49 compute-0 sudo[102766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnllkqmgodmfatofyfffrflrgdyofbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045048.5037494-140-170334603912976/AnsiballZ_selinux.py'
Feb 02 15:10:49 compute-0 sudo[102766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:49 compute-0 python3.9[102768]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 02 15:10:49 compute-0 sudo[102766]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:49 compute-0 ceph-mon[75334]: 8.1 scrub starts
Feb 02 15:10:49 compute-0 ceph-mon[75334]: 8.1 scrub ok
Feb 02 15:10:49 compute-0 ceph-mon[75334]: 10.1e scrub starts
Feb 02 15:10:49 compute-0 ceph-mon[75334]: 10.1e scrub ok
Feb 02 15:10:50 compute-0 sudo[102918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvifwayqimamatocqnsxlrbyvdbzqrtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045049.7549276-151-121397199734577/AnsiballZ_command.py'
Feb 02 15:10:50 compute-0 sudo[102918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:50 compute-0 python3.9[102920]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 02 15:10:50 compute-0 sudo[102918]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:50 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb 02 15:10:50 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb 02 15:10:50 compute-0 sudo[103070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfzciqjuwqcsjtcktnhcbhsbxlqqnck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045050.4406686-159-168769768312098/AnsiballZ_file.py'
Feb 02 15:10:50 compute-0 sudo[103070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:50 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 02 15:10:50 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 02 15:10:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:50 compute-0 ceph-mon[75334]: pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:50 compute-0 python3.9[103072]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:10:50 compute-0 sudo[103070]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:51 compute-0 sudo[103222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reabiwdvqkoqjcfmatkxlfxxkwulwnrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045051.1539893-167-179109545236032/AnsiballZ_mount.py'
Feb 02 15:10:51 compute-0 sudo[103222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:51 compute-0 python3.9[103224]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 02 15:10:51 compute-0 sudo[103222]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:51 compute-0 ceph-mon[75334]: 8.a scrub starts
Feb 02 15:10:51 compute-0 ceph-mon[75334]: 8.a scrub ok
Feb 02 15:10:51 compute-0 ceph-mon[75334]: 10.7 scrub starts
Feb 02 15:10:51 compute-0 ceph-mon[75334]: 10.7 scrub ok
Feb 02 15:10:52 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 02 15:10:52 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 02 15:10:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:52 compute-0 ceph-mon[75334]: pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:53 compute-0 sudo[103374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgoowkgvmikxynjvqwhoeejyknwxxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045052.7682729-195-218271669242651/AnsiballZ_file.py'
Feb 02 15:10:53 compute-0 sudo[103374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:53 compute-0 python3.9[103376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:10:53 compute-0 sudo[103374]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:53 compute-0 sudo[103526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pspmkemlpxokrmbqfbqwceszgbsaooox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045053.4995723-203-44054639772490/AnsiballZ_stat.py'
Feb 02 15:10:53 compute-0 sudo[103526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:53 compute-0 python3.9[103528]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:10:53 compute-0 sudo[103526]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:10:53 compute-0 ceph-mon[75334]: 10.17 scrub starts
Feb 02 15:10:53 compute-0 ceph-mon[75334]: 10.17 scrub ok
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:10:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:10:54 compute-0 sudo[103604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrolhskzafsorfwnfimuukdqadpwciqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045053.4995723-203-44054639772490/AnsiballZ_file.py'
Feb 02 15:10:54 compute-0 sudo[103604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:54 compute-0 python3.9[103606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:10:54 compute-0 sudo[103604]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:54 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb 02 15:10:54 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb 02 15:10:54 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb 02 15:10:54 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb 02 15:10:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb 02 15:10:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb 02 15:10:54 compute-0 ceph-mon[75334]: pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:55 compute-0 sudo[103756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvxhibysihvnjhgvjefiolnpwmwwzums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045054.927176-224-106280322681532/AnsiballZ_stat.py'
Feb 02 15:10:55 compute-0 sudo[103756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:55 compute-0 python3.9[103758]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:10:55 compute-0 sudo[103756]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:10:55 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb 02 15:10:55 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 5.5 scrub starts
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 7.7 scrub starts
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 5.5 scrub ok
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 7.7 scrub ok
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 3.7 scrub starts
Feb 02 15:10:55 compute-0 ceph-mon[75334]: 3.7 scrub ok
Feb 02 15:10:56 compute-0 sudo[103910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eciucachglvxqztnhcxmzhpphqshbnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045056.0711012-237-198892451296015/AnsiballZ_getent.py'
Feb 02 15:10:56 compute-0 sudo[103910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:56 compute-0 python3.9[103912]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 02 15:10:56 compute-0 sudo[103910]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 02 15:10:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 02 15:10:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 02 15:10:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 02 15:10:56 compute-0 ceph-mon[75334]: 8.8 scrub starts
Feb 02 15:10:56 compute-0 ceph-mon[75334]: 8.8 scrub ok
Feb 02 15:10:56 compute-0 ceph-mon[75334]: pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:57 compute-0 sudo[104063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjoondivrveqfmagqqnpmtjnyfdtwpir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045056.9265997-247-166497996940710/AnsiballZ_getent.py'
Feb 02 15:10:57 compute-0 sudo[104063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:57 compute-0 python3.9[104065]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 02 15:10:57 compute-0 sudo[104063]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb 02 15:10:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb 02 15:10:58 compute-0 ceph-mon[75334]: 11.1 scrub starts
Feb 02 15:10:58 compute-0 ceph-mon[75334]: 11.1 scrub ok
Feb 02 15:10:58 compute-0 ceph-mon[75334]: 7.1 scrub starts
Feb 02 15:10:58 compute-0 ceph-mon[75334]: 7.1 scrub ok
Feb 02 15:10:58 compute-0 sudo[104216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsqvehdoczdkvwqiudmsunedturhkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045057.5754733-255-11931303491050/AnsiballZ_group.py'
Feb 02 15:10:58 compute-0 sudo[104216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:58 compute-0 python3.9[104218]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:10:58 compute-0 sudo[104216]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:58 compute-0 sudo[104368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihrcmyhhpjflgnplusapwlfacqjerurp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045058.4571335-264-105467047658918/AnsiballZ_file.py'
Feb 02 15:10:58 compute-0 sudo[104368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:58 compute-0 python3.9[104370]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 02 15:10:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb 02 15:10:58 compute-0 sudo[104368]: pam_unix(sudo:session): session closed for user root
Feb 02 15:10:58 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb 02 15:10:59 compute-0 ceph-mon[75334]: 11.6 scrub starts
Feb 02 15:10:59 compute-0 ceph-mon[75334]: 11.6 scrub ok
Feb 02 15:10:59 compute-0 ceph-mon[75334]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:10:59 compute-0 sudo[104520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzhhelqnpxnhcyqlkwrveqyfqilhumyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045059.2654197-275-165245420618015/AnsiballZ_dnf.py'
Feb 02 15:10:59 compute-0 sudo[104520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:10:59 compute-0 python3.9[104522]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:10:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 02 15:10:59 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 02 15:11:00 compute-0 ceph-mon[75334]: 3.e scrub starts
Feb 02 15:11:00 compute-0 ceph-mon[75334]: 3.e scrub ok
Feb 02 15:11:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:00 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb 02 15:11:00 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb 02 15:11:00 compute-0 sudo[104520]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:01 compute-0 ceph-mon[75334]: 7.d scrub starts
Feb 02 15:11:01 compute-0 ceph-mon[75334]: 7.d scrub ok
Feb 02 15:11:01 compute-0 ceph-mon[75334]: pgmap v236: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:01 compute-0 sudo[104673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygltyhxuuwrljmqofqzhbyhlsqfxpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045061.1256256-283-129979373905016/AnsiballZ_file.py'
Feb 02 15:11:01 compute-0 sudo[104673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:01 compute-0 python3.9[104675]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:11:01 compute-0 sudo[104673]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:01 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb 02 15:11:01 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb 02 15:11:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb 02 15:11:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb 02 15:11:02 compute-0 sudo[104825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrmcaqkedxkezpjfbwmbdqlqavqmeup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045061.7447238-291-137812927303613/AnsiballZ_stat.py'
Feb 02 15:11:02 compute-0 sudo[104825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:02 compute-0 ceph-mon[75334]: 5.19 scrub starts
Feb 02 15:11:02 compute-0 ceph-mon[75334]: 5.19 scrub ok
Feb 02 15:11:02 compute-0 python3.9[104827]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:11:02 compute-0 sudo[104825]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:02 compute-0 sudo[104903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgzburajvjkusqpsdolhwqvzuifjudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045061.7447238-291-137812927303613/AnsiballZ_file.py'
Feb 02 15:11:02 compute-0 sudo[104903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:02 compute-0 python3.9[104905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:11:02 compute-0 sudo[104903]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:03 compute-0 sudo[105055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlktmydlsaqdbuszjshtudiscrefmsio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045062.8464484-304-277731250146961/AnsiballZ_stat.py'
Feb 02 15:11:03 compute-0 ceph-mon[75334]: 3.9 scrub starts
Feb 02 15:11:03 compute-0 ceph-mon[75334]: 3.9 scrub ok
Feb 02 15:11:03 compute-0 ceph-mon[75334]: 11.d scrub starts
Feb 02 15:11:03 compute-0 ceph-mon[75334]: 11.d scrub ok
Feb 02 15:11:03 compute-0 ceph-mon[75334]: pgmap v237: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:03 compute-0 sudo[105055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:03 compute-0 python3.9[105057]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:11:03 compute-0 sudo[105055]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:03 compute-0 sudo[105133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywtzcdyoctfuawsgqabfjdslkudonfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045062.8464484-304-277731250146961/AnsiballZ_file.py'
Feb 02 15:11:03 compute-0 sudo[105133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:03 compute-0 python3.9[105135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:11:03 compute-0 sudo[105133]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 02 15:11:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 02 15:11:04 compute-0 sudo[105285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-armhgskgzxbytsazhizicioqfrrhwosy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045064.0268052-319-212400190567995/AnsiballZ_dnf.py'
Feb 02 15:11:04 compute-0 sudo[105285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:04 compute-0 python3.9[105287]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:11:04 compute-0 ceph-mon[75334]: 5.18 scrub starts
Feb 02 15:11:04 compute-0 ceph-mon[75334]: 5.18 scrub ok
Feb 02 15:11:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb 02 15:11:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb 02 15:11:05 compute-0 ceph-mon[75334]: pgmap v238: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:05 compute-0 sudo[105285]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb 02 15:11:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb 02 15:11:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb 02 15:11:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb 02 15:11:06 compute-0 python3.9[105438]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 3.1 scrub starts
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 3.1 scrub ok
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 10.11 scrub starts
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 7.a scrub starts
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 10.11 scrub ok
Feb 02 15:11:06 compute-0 ceph-mon[75334]: 7.a scrub ok
Feb 02 15:11:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:07 compute-0 python3.9[105590]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 02 15:11:07 compute-0 ceph-mon[75334]: pgmap v239: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:07 compute-0 python3.9[105740]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:11:08 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb 02 15:11:08 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb 02 15:11:08 compute-0 ceph-mon[75334]: 10.8 scrub starts
Feb 02 15:11:08 compute-0 ceph-mon[75334]: 10.8 scrub ok
Feb 02 15:11:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:08 compute-0 sudo[105890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaazwztictsgcwtppocdtwggncuruayu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045068.2902775-360-78691848331623/AnsiballZ_systemd.py'
Feb 02 15:11:08 compute-0 sudo[105890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:09 compute-0 python3.9[105892]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:11:09 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 02 15:11:09 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 02 15:11:09 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 02 15:11:09 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 02 15:11:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 02 15:11:09 compute-0 sudo[105890]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:09 compute-0 ceph-mon[75334]: pgmap v240: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:10 compute-0 python3.9[106053]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 02 15:11:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:10 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb 02 15:11:10 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb 02 15:11:11 compute-0 ceph-mon[75334]: pgmap v241: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:11 compute-0 ceph-mon[75334]: 11.b scrub starts
Feb 02 15:11:11 compute-0 ceph-mon[75334]: 11.b scrub ok
Feb 02 15:11:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb 02 15:11:11 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb 02 15:11:11 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb 02 15:11:11 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb 02 15:11:12 compute-0 sudo[106203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhvvzijexcilhvgfznmsliyerxqshwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045072.2712278-417-280215212105016/AnsiballZ_systemd.py'
Feb 02 15:11:12 compute-0 sudo[106203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:12 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 02 15:11:12 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 02 15:11:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 4.10 scrub starts
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 4.10 scrub ok
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 7.e scrub starts
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 7.e scrub ok
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 5.7 scrub starts
Feb 02 15:11:12 compute-0 ceph-mon[75334]: 5.7 scrub ok
Feb 02 15:11:12 compute-0 python3.9[106205]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:11:12 compute-0 sudo[106203]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:13 compute-0 sudo[106357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoxmlxavapmiaulqvnclyrxybeanadbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045073.0751293-417-106448140143410/AnsiballZ_systemd.py'
Feb 02 15:11:13 compute-0 sudo[106357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:13 compute-0 python3.9[106359]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:11:13 compute-0 sudo[106357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:13 compute-0 ceph-mon[75334]: pgmap v242: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:14 compute-0 sshd-session[99656]: Connection closed by 192.168.122.30 port 58098
Feb 02 15:11:14 compute-0 sshd-session[99653]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:11:14 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Feb 02 15:11:14 compute-0 systemd[1]: session-35.scope: Consumed 1min 1.093s CPU time.
Feb 02 15:11:14 compute-0 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Feb 02 15:11:14 compute-0 systemd-logind[786]: Removed session 35.
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:14 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb 02 15:11:14 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb 02 15:11:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:14 compute-0 ceph-mon[75334]: 8.e scrub starts
Feb 02 15:11:14 compute-0 ceph-mon[75334]: 8.e scrub ok
Feb 02 15:11:14 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb 02 15:11:14 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb 02 15:11:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:15 compute-0 ceph-mon[75334]: pgmap v243: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:15 compute-0 ceph-mon[75334]: 11.2 scrub starts
Feb 02 15:11:15 compute-0 ceph-mon[75334]: 11.2 scrub ok
Feb 02 15:11:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb 02 15:11:15 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb 02 15:11:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:16 compute-0 ceph-mon[75334]: 4.12 scrub starts
Feb 02 15:11:16 compute-0 ceph-mon[75334]: 4.12 scrub ok
Feb 02 15:11:16 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 02 15:11:16 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 02 15:11:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 02 15:11:16 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 02 15:11:17 compute-0 ceph-mon[75334]: pgmap v244: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:17 compute-0 ceph-mon[75334]: 7.8 scrub starts
Feb 02 15:11:17 compute-0 ceph-mon[75334]: 7.8 scrub ok
Feb 02 15:11:17 compute-0 ceph-mon[75334]: 10.10 scrub starts
Feb 02 15:11:17 compute-0 ceph-mon[75334]: 10.10 scrub ok
Feb 02 15:11:17 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 02 15:11:17 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 02 15:11:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb 02 15:11:18 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb 02 15:11:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:18 compute-0 ceph-mon[75334]: 7.c scrub starts
Feb 02 15:11:18 compute-0 ceph-mon[75334]: 7.c scrub ok
Feb 02 15:11:18 compute-0 ceph-mon[75334]: pgmap v245: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:18 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb 02 15:11:18 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb 02 15:11:19 compute-0 sshd-session[106386]: Accepted publickey for zuul from 192.168.122.30 port 51582 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:11:19 compute-0 systemd-logind[786]: New session 36 of user zuul.
Feb 02 15:11:19 compute-0 systemd[1]: Started Session 36 of User zuul.
Feb 02 15:11:19 compute-0 sshd-session[106386]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:11:19 compute-0 ceph-mon[75334]: 5.3 scrub starts
Feb 02 15:11:19 compute-0 ceph-mon[75334]: 5.3 scrub ok
Feb 02 15:11:19 compute-0 ceph-mon[75334]: 5.13 scrub starts
Feb 02 15:11:19 compute-0 ceph-mon[75334]: 5.13 scrub ok
Feb 02 15:11:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 02 15:11:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 02 15:11:20 compute-0 python3.9[106539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:11:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb 02 15:11:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb 02 15:11:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:20 compute-0 ceph-mon[75334]: 10.1a scrub starts
Feb 02 15:11:20 compute-0 ceph-mon[75334]: 10.1a scrub ok
Feb 02 15:11:20 compute-0 ceph-mon[75334]: 3.12 scrub starts
Feb 02 15:11:20 compute-0 ceph-mon[75334]: 3.12 scrub ok
Feb 02 15:11:20 compute-0 ceph-mon[75334]: pgmap v246: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:21 compute-0 sudo[106693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovloucgqinfegkzxpmdlaanfjbjfqbzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045081.2498946-31-53821137759951/AnsiballZ_getent.py'
Feb 02 15:11:21 compute-0 sudo[106693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:21 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb 02 15:11:21 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb 02 15:11:21 compute-0 python3.9[106695]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 02 15:11:21 compute-0 sudo[106693]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:22 compute-0 ceph-mon[75334]: 11.3 scrub starts
Feb 02 15:11:22 compute-0 ceph-mon[75334]: 11.3 scrub ok
Feb 02 15:11:22 compute-0 sudo[106846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmveydlsgtwphlfmtxhofkpodbmkcion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045082.31136-43-146667503803855/AnsiballZ_setup.py'
Feb 02 15:11:22 compute-0 sudo[106846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:22 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb 02 15:11:22 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb 02 15:11:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:22 compute-0 python3.9[106848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:11:23 compute-0 sudo[106846]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:23 compute-0 ceph-mon[75334]: pgmap v247: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:23 compute-0 sudo[106930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjfmxooawjrdrckktfuwfvzvtbhwtiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045082.31136-43-146667503803855/AnsiballZ_dnf.py'
Feb 02 15:11:23 compute-0 sudo[106930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:23 compute-0 python3.9[106932]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 15:11:24 compute-0 ceph-mon[75334]: 5.14 scrub starts
Feb 02 15:11:24 compute-0 ceph-mon[75334]: 5.14 scrub ok
Feb 02 15:11:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:25 compute-0 sudo[106930]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:25 compute-0 ceph-mon[75334]: pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:25 compute-0 sudo[107083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ablyuivzghfxdnfovftrnayhhuyviznj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045085.3205762-57-587293715058/AnsiballZ_dnf.py'
Feb 02 15:11:25 compute-0 sudo[107083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:25 compute-0 python3.9[107085]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:11:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb 02 15:11:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb 02 15:11:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:27 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 02 15:11:27 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 02 15:11:27 compute-0 sudo[107083]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:27 compute-0 ceph-mon[75334]: 8.9 scrub starts
Feb 02 15:11:27 compute-0 ceph-mon[75334]: 8.9 scrub ok
Feb 02 15:11:27 compute-0 ceph-mon[75334]: pgmap v249: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:27 compute-0 ceph-mon[75334]: 3.8 scrub starts
Feb 02 15:11:27 compute-0 ceph-mon[75334]: 3.8 scrub ok
Feb 02 15:11:27 compute-0 sudo[107236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahdnjnuqpjhxklkodhzicatbgvnbfcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045087.252016-65-23070392995187/AnsiballZ_systemd.py'
Feb 02 15:11:27 compute-0 sudo[107236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:28 compute-0 python3.9[107238]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:11:28 compute-0 sudo[107236]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 02 15:11:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 02 15:11:29 compute-0 python3.9[107391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:11:29 compute-0 sudo[107541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shgqjugrbpstkldedaicxzigzzrntyeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045089.4389434-83-24897551886351/AnsiballZ_sefcontext.py'
Feb 02 15:11:29 compute-0 sudo[107541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:29 compute-0 ceph-mon[75334]: 3.3 scrub starts
Feb 02 15:11:29 compute-0 ceph-mon[75334]: pgmap v250: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:29 compute-0 ceph-mon[75334]: 3.3 scrub ok
Feb 02 15:11:30 compute-0 python3.9[107543]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 02 15:11:30 compute-0 sudo[107541]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb 02 15:11:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb 02 15:11:30 compute-0 python3.9[107693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:11:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 02 15:11:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 02 15:11:31 compute-0 sudo[107849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyhdimkyieuoeopgviwansorrpsffde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045091.3886447-101-281271724329122/AnsiballZ_dnf.py'
Feb 02 15:11:31 compute-0 sudo[107849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:31 compute-0 python3.9[107851]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:11:31 compute-0 ceph-mon[75334]: pgmap v251: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:31 compute-0 ceph-mon[75334]: 5.1d scrub starts
Feb 02 15:11:31 compute-0 ceph-mon[75334]: 5.1d scrub ok
Feb 02 15:11:31 compute-0 ceph-mon[75334]: 11.1e scrub starts
Feb 02 15:11:31 compute-0 ceph-mon[75334]: 11.1e scrub ok
Feb 02 15:11:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb 02 15:11:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb 02 15:11:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:32 compute-0 ceph-mon[75334]: 4.8 scrub starts
Feb 02 15:11:32 compute-0 ceph-mon[75334]: 4.8 scrub ok
Feb 02 15:11:33 compute-0 sudo[107849]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:33 compute-0 sudo[108002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqypbslxeugailjtlkmuzikedgrmnlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045093.2473178-109-11584789482689/AnsiballZ_command.py'
Feb 02 15:11:33 compute-0 sudo[108002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:33 compute-0 python3.9[108004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:11:33 compute-0 ceph-mon[75334]: pgmap v252: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:34 compute-0 sudo[108140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:11:34 compute-0 sudo[108140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:34 compute-0 sudo[108140]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:34 compute-0 sudo[108165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:11:34 compute-0 sudo[108002]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:34 compute-0 sudo[108165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:34 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb 02 15:11:34 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb 02 15:11:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:34 compute-0 ceph-mon[75334]: pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:34 compute-0 sudo[108165]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:11:35 compute-0 sudo[108320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:11:35 compute-0 sudo[108320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:35 compute-0 sudo[108320]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb 02 15:11:35 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb 02 15:11:35 compute-0 sudo[108368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:11:35 compute-0 sudo[108368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:35 compute-0 sudo[108420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqsuzgzqnqxdywhwmtzdvyqazzugidij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045094.7106173-117-164860117118000/AnsiballZ_file.py'
Feb 02 15:11:35 compute-0 sudo[108420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:35 compute-0 python3.9[108422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 02 15:11:35 compute-0 sudo[108420]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.428089551 +0000 UTC m=+0.053204177 container create d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:11:35 compute-0 systemd[1]: Started libpod-conmon-d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0.scope.
Feb 02 15:11:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.407798412 +0000 UTC m=+0.032913058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.509003098 +0000 UTC m=+0.134117724 container init d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.514893052 +0000 UTC m=+0.140007678 container start d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.518577783 +0000 UTC m=+0.143692439 container attach d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:11:35 compute-0 funny_agnesi[108473]: 167 167
Feb 02 15:11:35 compute-0 systemd[1]: libpod-d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0.scope: Deactivated successfully.
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.520194123 +0000 UTC m=+0.145308759 container died d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-252aef7686fcec5a6c18a2752075aed4f132da2bfc8930f794826e98d66dc185-merged.mount: Deactivated successfully.
Feb 02 15:11:35 compute-0 podman[108435]: 2026-02-02 15:11:35.554941066 +0000 UTC m=+0.180055702 container remove d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:11:35 compute-0 systemd[1]: libpod-conmon-d7e6b414def6108ecdd9b538e27b2fdd4bb1ebf89b8d38e1186158cd8cffa7c0.scope: Deactivated successfully.
Feb 02 15:11:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:35 compute-0 podman[108551]: 2026-02-02 15:11:35.691435716 +0000 UTC m=+0.045759994 container create 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:11:35 compute-0 systemd[1]: Started libpod-conmon-800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652.scope.
Feb 02 15:11:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:35 compute-0 podman[108551]: 2026-02-02 15:11:35.668237036 +0000 UTC m=+0.022561324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:35 compute-0 podman[108551]: 2026-02-02 15:11:35.78321621 +0000 UTC m=+0.137540528 container init 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:11:35 compute-0 podman[108551]: 2026-02-02 15:11:35.788220883 +0000 UTC m=+0.142545161 container start 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:11:35 compute-0 podman[108551]: 2026-02-02 15:11:35.792579669 +0000 UTC m=+0.146903987 container attach 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:11:35 compute-0 ceph-mon[75334]: 7.6 scrub starts
Feb 02 15:11:35 compute-0 ceph-mon[75334]: 7.6 scrub ok
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:11:35 compute-0 ceph-mon[75334]: 8.1c scrub starts
Feb 02 15:11:35 compute-0 ceph-mon[75334]: 8.1c scrub ok
Feb 02 15:11:36 compute-0 python3.9[108645]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:11:36 compute-0 lucid_cray[108567]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:11:36 compute-0 lucid_cray[108567]: --> All data devices are unavailable
Feb 02 15:11:36 compute-0 systemd[1]: libpod-800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652.scope: Deactivated successfully.
Feb 02 15:11:36 compute-0 podman[108687]: 2026-02-02 15:11:36.305437392 +0000 UTC m=+0.032395338 container died 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac153848b09713102b79ba312426f44b0ad6dc5d5e276e442eae0defb9024c40-merged.mount: Deactivated successfully.
Feb 02 15:11:36 compute-0 podman[108687]: 2026-02-02 15:11:36.349436681 +0000 UTC m=+0.076394607 container remove 800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:36 compute-0 systemd[1]: libpod-conmon-800a8e05a5dd259c4a91f207f741673790e2f6baf027979143b27474c7979652.scope: Deactivated successfully.
Feb 02 15:11:36 compute-0 sudo[108368]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:36 compute-0 sudo[108777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:11:36 compute-0 sudo[108777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:36 compute-0 sudo[108777]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:36 compute-0 sudo[108826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:11:36 compute-0 sudo[108826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:36 compute-0 sudo[108877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttlkewxriktawkvqsdggxsmtohpwnpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045096.2987485-133-228427208538460/AnsiballZ_dnf.py'
Feb 02 15:11:36 compute-0 sudo[108877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:36 compute-0 python3.9[108879]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.792992092 +0000 UTC m=+0.052025309 container create 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:36 compute-0 systemd[1]: Started libpod-conmon-67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21.scope.
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.764557113 +0000 UTC m=+0.023590360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.880015128 +0000 UTC m=+0.139048335 container init 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.885108013 +0000 UTC m=+0.144141220 container start 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.89030441 +0000 UTC m=+0.149337607 container attach 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:11:36 compute-0 blissful_bell[108909]: 167 167
Feb 02 15:11:36 compute-0 systemd[1]: libpod-67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21.scope: Deactivated successfully.
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.899181729 +0000 UTC m=+0.158214926 container died 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9947ccffdd922879522ab1b271e9f1d2daa6f7343553055fe2880309f4f8b464-merged.mount: Deactivated successfully.
Feb 02 15:11:36 compute-0 podman[108892]: 2026-02-02 15:11:36.93751135 +0000 UTC m=+0.196544527 container remove 67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bell, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:11:36 compute-0 systemd[1]: libpod-conmon-67a057007348eb234dce96e1f86f275f8e7c3a1db20a88e8d7d3ca1b7c5e3b21.scope: Deactivated successfully.
Feb 02 15:11:36 compute-0 ceph-mon[75334]: pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:36 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 02 15:11:36 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 02 15:11:37 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 02 15:11:37 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.122645635 +0000 UTC m=+0.059572883 container create dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:11:37 compute-0 systemd[1]: Started libpod-conmon-dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee.scope.
Feb 02 15:11:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d5d2aded782b32a8f3fcfa65236097c3f0ffe26dec7d714acf0bac0864543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d5d2aded782b32a8f3fcfa65236097c3f0ffe26dec7d714acf0bac0864543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d5d2aded782b32a8f3fcfa65236097c3f0ffe26dec7d714acf0bac0864543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d5d2aded782b32a8f3fcfa65236097c3f0ffe26dec7d714acf0bac0864543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.098516362 +0000 UTC m=+0.035443690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.210520922 +0000 UTC m=+0.147448180 container init dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.216460178 +0000 UTC m=+0.153387416 container start dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.220568469 +0000 UTC m=+0.157495717 container attach dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:11:37 compute-0 zen_cohen[108950]: {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     "0": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "devices": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "/dev/loop3"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             ],
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_name": "ceph_lv0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_size": "21470642176",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "name": "ceph_lv0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "tags": {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_name": "ceph",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.crush_device_class": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.encrypted": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.objectstore": "bluestore",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_id": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.vdo": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.with_tpm": "0"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             },
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "vg_name": "ceph_vg0"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         }
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     ],
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     "1": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "devices": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "/dev/loop4"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             ],
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_name": "ceph_lv1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_size": "21470642176",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "name": "ceph_lv1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "tags": {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_name": "ceph",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.crush_device_class": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.encrypted": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.objectstore": "bluestore",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_id": "1",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.vdo": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.with_tpm": "0"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             },
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "vg_name": "ceph_vg1"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         }
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     ],
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     "2": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "devices": [
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "/dev/loop5"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             ],
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_name": "ceph_lv2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_size": "21470642176",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "name": "ceph_lv2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "tags": {
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.cluster_name": "ceph",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.crush_device_class": "",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.encrypted": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.objectstore": "bluestore",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osd_id": "2",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.vdo": "0",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:                 "ceph.with_tpm": "0"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             },
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "type": "block",
Feb 02 15:11:37 compute-0 zen_cohen[108950]:             "vg_name": "ceph_vg2"
Feb 02 15:11:37 compute-0 zen_cohen[108950]:         }
Feb 02 15:11:37 compute-0 zen_cohen[108950]:     ]
Feb 02 15:11:37 compute-0 zen_cohen[108950]: }
Feb 02 15:11:37 compute-0 systemd[1]: libpod-dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee.scope: Deactivated successfully.
Feb 02 15:11:37 compute-0 conmon[108950]: conmon dfff786c9aab520db8c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee.scope/container/memory.events
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.515591772 +0000 UTC m=+0.452519010 container died dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 02 15:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe7d5d2aded782b32a8f3fcfa65236097c3f0ffe26dec7d714acf0bac0864543-merged.mount: Deactivated successfully.
Feb 02 15:11:37 compute-0 podman[108933]: 2026-02-02 15:11:37.560165767 +0000 UTC m=+0.497093005 container remove dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:11:37 compute-0 systemd[1]: libpod-conmon-dfff786c9aab520db8c2ebda7aa40b2e277c3a410f9768e7eecee587a9d8b0ee.scope: Deactivated successfully.
Feb 02 15:11:37 compute-0 sudo[108826]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:37 compute-0 sudo[108971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:11:37 compute-0 sudo[108971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:37 compute-0 sudo[108971]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:37 compute-0 sudo[108996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:11:37 compute-0 sudo[108996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb 02 15:11:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb 02 15:11:37 compute-0 ceph-mon[75334]: 10.6 scrub starts
Feb 02 15:11:37 compute-0 ceph-mon[75334]: 10.6 scrub ok
Feb 02 15:11:37 compute-0 ceph-mon[75334]: 4.e scrub starts
Feb 02 15:11:37 compute-0 ceph-mon[75334]: 4.e scrub ok
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.03805317 +0000 UTC m=+0.040703141 container create db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:11:38 compute-0 systemd[1]: Started libpod-conmon-db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498.scope.
Feb 02 15:11:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.020739675 +0000 UTC m=+0.023389646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.118840063 +0000 UTC m=+0.121490064 container init db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:11:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.125529197 +0000 UTC m=+0.128179178 container start db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.12890086 +0000 UTC m=+0.131550861 container attach db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:11:38 compute-0 nice_bell[109050]: 167 167
Feb 02 15:11:38 compute-0 systemd[1]: libpod-db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498.scope: Deactivated successfully.
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.132996571 +0000 UTC m=+0.135646552 container died db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f57ba8b57ae5ff2678d024a87287589c06aa98792d830cc249ffa09252f0b7-merged.mount: Deactivated successfully.
Feb 02 15:11:38 compute-0 podman[109034]: 2026-02-02 15:11:38.183868699 +0000 UTC m=+0.186518660 container remove db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:38 compute-0 systemd[1]: libpod-conmon-db4c3650ca5cb98b699ec4d528d8788c43afe8292290f344157a2047075df498.scope: Deactivated successfully.
Feb 02 15:11:38 compute-0 sudo[108877]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:38 compute-0 podman[109099]: 2026-02-02 15:11:38.340174327 +0000 UTC m=+0.049734772 container create 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:11:38 compute-0 systemd[1]: Started libpod-conmon-4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5.scope.
Feb 02 15:11:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663a1bed2f2fce629a46ccbf48fcaf23237934ecfce1287a65e208eb5cbc9adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663a1bed2f2fce629a46ccbf48fcaf23237934ecfce1287a65e208eb5cbc9adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663a1bed2f2fce629a46ccbf48fcaf23237934ecfce1287a65e208eb5cbc9adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663a1bed2f2fce629a46ccbf48fcaf23237934ecfce1287a65e208eb5cbc9adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:11:38 compute-0 podman[109099]: 2026-02-02 15:11:38.323339734 +0000 UTC m=+0.032900199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:11:38 compute-0 podman[109099]: 2026-02-02 15:11:38.419291939 +0000 UTC m=+0.128852404 container init 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:11:38 compute-0 podman[109099]: 2026-02-02 15:11:38.427096821 +0000 UTC m=+0.136657286 container start 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:11:38 compute-0 podman[109099]: 2026-02-02 15:11:38.431284524 +0000 UTC m=+0.140844969 container attach 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:11:38 compute-0 sudo[109255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtcvibsknkpnzypuwjsuzfifqmwykekv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045098.4501314-142-233611493611496/AnsiballZ_dnf.py'
Feb 02 15:11:38 compute-0 sudo[109255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:38 compute-0 python3.9[109257]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:11:38 compute-0 ceph-mon[75334]: 7.3 scrub starts
Feb 02 15:11:38 compute-0 ceph-mon[75334]: 7.3 scrub ok
Feb 02 15:11:38 compute-0 ceph-mon[75334]: 11.1f scrub starts
Feb 02 15:11:38 compute-0 ceph-mon[75334]: 11.1f scrub ok
Feb 02 15:11:38 compute-0 ceph-mon[75334]: pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:39 compute-0 lvm[109320]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:11:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb 02 15:11:39 compute-0 lvm[109320]: VG ceph_vg0 finished
Feb 02 15:11:39 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb 02 15:11:39 compute-0 lvm[109323]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:11:39 compute-0 lvm[109323]: VG ceph_vg1 finished
Feb 02 15:11:39 compute-0 lvm[109325]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:11:39 compute-0 lvm[109325]: VG ceph_vg2 finished
Feb 02 15:11:39 compute-0 distracted_chatterjee[109115]: {}
Feb 02 15:11:39 compute-0 systemd[1]: libpod-4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5.scope: Deactivated successfully.
Feb 02 15:11:39 compute-0 systemd[1]: libpod-4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5.scope: Consumed 1.179s CPU time.
Feb 02 15:11:39 compute-0 podman[109099]: 2026-02-02 15:11:39.269729929 +0000 UTC m=+0.979290374 container died 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-663a1bed2f2fce629a46ccbf48fcaf23237934ecfce1287a65e208eb5cbc9adb-merged.mount: Deactivated successfully.
Feb 02 15:11:39 compute-0 podman[109099]: 2026-02-02 15:11:39.316759283 +0000 UTC m=+1.026319728 container remove 4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:11:39 compute-0 systemd[1]: libpod-conmon-4f2708a24b3f7f2aafefe52cbb2bce199b4ab63c9de89809454d85642f780fd5.scope: Deactivated successfully.
Feb 02 15:11:39 compute-0 sudo[108996]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:11:39 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:11:39 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:39 compute-0 sudo[109341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:11:39 compute-0 sudo[109341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:11:39 compute-0 sudo[109341]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:39 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb 02 15:11:39 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb 02 15:11:39 compute-0 ceph-mon[75334]: 4.11 scrub starts
Feb 02 15:11:39 compute-0 ceph-mon[75334]: 4.11 scrub ok
Feb 02 15:11:39 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:39 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:11:40 compute-0 sudo[109255]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 02 15:11:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 02 15:11:40 compute-0 sudo[109515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyuenzgtowdruvofrclrphksfxgequnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045100.6739767-154-154142509446638/AnsiballZ_stat.py'
Feb 02 15:11:40 compute-0 sudo[109515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:41 compute-0 ceph-mon[75334]: 10.19 scrub starts
Feb 02 15:11:41 compute-0 ceph-mon[75334]: 10.19 scrub ok
Feb 02 15:11:41 compute-0 ceph-mon[75334]: pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb 02 15:11:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb 02 15:11:41 compute-0 python3.9[109517]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:11:41 compute-0 sudo[109515]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:41 compute-0 sudo[109669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mskzfnxcxaqiasvuzwfitgzowpavywdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045101.3519537-162-197006526762376/AnsiballZ_slurp.py'
Feb 02 15:11:41 compute-0 sudo[109669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:11:42 compute-0 python3.9[109671]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb 02 15:11:42 compute-0 ceph-mon[75334]: 11.f scrub starts
Feb 02 15:11:42 compute-0 ceph-mon[75334]: 11.f scrub ok
Feb 02 15:11:42 compute-0 ceph-mon[75334]: 4.9 scrub starts
Feb 02 15:11:42 compute-0 ceph-mon[75334]: 4.9 scrub ok
Feb 02 15:11:42 compute-0 sudo[109669]: pam_unix(sudo:session): session closed for user root
Feb 02 15:11:42 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 02 15:11:42 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:11:42
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'vms', 'cephfs.cephfs.meta']
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:11:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:43 compute-0 ceph-mon[75334]: 11.1c scrub starts
Feb 02 15:11:43 compute-0 ceph-mon[75334]: 11.1c scrub ok
Feb 02 15:11:43 compute-0 ceph-mon[75334]: pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:43 compute-0 sshd-session[106389]: Connection closed by 192.168.122.30 port 51582
Feb 02 15:11:43 compute-0 sshd-session[106386]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:11:43 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Feb 02 15:11:43 compute-0 systemd[1]: session-36.scope: Consumed 17.034s CPU time.
Feb 02 15:11:43 compute-0 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Feb 02 15:11:43 compute-0 systemd-logind[786]: Removed session 36.
Feb 02 15:11:43 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb 02 15:11:43 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb 02 15:11:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 02 15:11:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 02 15:11:44 compute-0 ceph-mon[75334]: 4.13 scrub starts
Feb 02 15:11:44 compute-0 ceph-mon[75334]: 4.13 scrub ok
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:11:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb 02 15:11:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb 02 15:11:45 compute-0 ceph-mon[75334]: 3.6 scrub starts
Feb 02 15:11:45 compute-0 ceph-mon[75334]: 3.6 scrub ok
Feb 02 15:11:45 compute-0 ceph-mon[75334]: pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:45 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb 02 15:11:45 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb 02 15:11:46 compute-0 ceph-mon[75334]: 11.18 scrub starts
Feb 02 15:11:46 compute-0 ceph-mon[75334]: 11.18 scrub ok
Feb 02 15:11:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:47 compute-0 ceph-mon[75334]: 11.e scrub starts
Feb 02 15:11:47 compute-0 ceph-mon[75334]: 11.e scrub ok
Feb 02 15:11:47 compute-0 ceph-mon[75334]: pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 02 15:11:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 02 15:11:48 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 02 15:11:48 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 02 15:11:48 compute-0 ceph-mon[75334]: 7.15 scrub starts
Feb 02 15:11:48 compute-0 sshd-session[109696]: Accepted publickey for zuul from 192.168.122.30 port 33344 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:11:48 compute-0 systemd-logind[786]: New session 37 of user zuul.
Feb 02 15:11:48 compute-0 systemd[1]: Started Session 37 of User zuul.
Feb 02 15:11:48 compute-0 sshd-session[109696]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:11:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:49 compute-0 ceph-mon[75334]: 5.2 scrub starts
Feb 02 15:11:49 compute-0 ceph-mon[75334]: 5.2 scrub ok
Feb 02 15:11:49 compute-0 ceph-mon[75334]: 7.15 scrub ok
Feb 02 15:11:49 compute-0 ceph-mon[75334]: pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:49 compute-0 python3.9[109849]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:11:50 compute-0 python3.9[110003]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:11:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:51 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 02 15:11:51 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 02 15:11:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 02 15:11:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 02 15:11:51 compute-0 python3.9[110196]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:11:51 compute-0 ceph-mon[75334]: pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:51 compute-0 ceph-mon[75334]: 5.4 scrub starts
Feb 02 15:11:51 compute-0 ceph-mon[75334]: 5.4 scrub ok
Feb 02 15:11:52 compute-0 sshd-session[109699]: Connection closed by 192.168.122.30 port 33344
Feb 02 15:11:52 compute-0 sshd-session[109696]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:11:52 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Feb 02 15:11:52 compute-0 systemd[1]: session-37.scope: Consumed 2.269s CPU time.
Feb 02 15:11:52 compute-0 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Feb 02 15:11:52 compute-0 systemd-logind[786]: Removed session 37.
Feb 02 15:11:52 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 02 15:11:52 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 02 15:11:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:52 compute-0 ceph-mon[75334]: 3.16 scrub starts
Feb 02 15:11:52 compute-0 ceph-mon[75334]: 3.16 scrub ok
Feb 02 15:11:52 compute-0 ceph-mon[75334]: 5.9 scrub starts
Feb 02 15:11:52 compute-0 ceph-mon[75334]: 5.9 scrub ok
Feb 02 15:11:53 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb 02 15:11:53 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb 02 15:11:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 02 15:11:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 02 15:11:53 compute-0 ceph-mon[75334]: pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:53 compute-0 ceph-mon[75334]: 8.b scrub starts
Feb 02 15:11:53 compute-0 ceph-mon[75334]: 8.b scrub ok
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:11:53 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:11:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:54 compute-0 ceph-mon[75334]: 4.1b scrub starts
Feb 02 15:11:54 compute-0 ceph-mon[75334]: 4.1b scrub ok
Feb 02 15:11:54 compute-0 ceph-mon[75334]: pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:55 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 02 15:11:55 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 02 15:11:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:11:55 compute-0 ceph-mon[75334]: 3.f scrub starts
Feb 02 15:11:55 compute-0 ceph-mon[75334]: 3.f scrub ok
Feb 02 15:11:56 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb 02 15:11:56 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb 02 15:11:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:56 compute-0 ceph-mon[75334]: 4.f scrub starts
Feb 02 15:11:56 compute-0 ceph-mon[75334]: 4.f scrub ok
Feb 02 15:11:56 compute-0 ceph-mon[75334]: pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 02 15:11:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 02 15:11:57 compute-0 sshd-session[110222]: Accepted publickey for zuul from 192.168.122.30 port 55560 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:11:57 compute-0 systemd-logind[786]: New session 38 of user zuul.
Feb 02 15:11:57 compute-0 systemd[1]: Started Session 38 of User zuul.
Feb 02 15:11:57 compute-0 sshd-session[110222]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:11:57 compute-0 ceph-mon[75334]: 5.1 scrub starts
Feb 02 15:11:57 compute-0 ceph-mon[75334]: 5.1 scrub ok
Feb 02 15:11:58 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb 02 15:11:58 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb 02 15:11:58 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb 02 15:11:58 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb 02 15:11:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:58 compute-0 python3.9[110375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:11:58 compute-0 ceph-mon[75334]: 8.c scrub starts
Feb 02 15:11:58 compute-0 ceph-mon[75334]: 8.c scrub ok
Feb 02 15:11:58 compute-0 ceph-mon[75334]: 10.f scrub starts
Feb 02 15:11:58 compute-0 ceph-mon[75334]: 10.f scrub ok
Feb 02 15:11:58 compute-0 ceph-mon[75334]: pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:11:59 compute-0 python3.9[110529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:12:00 compute-0 sudo[110683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otqkfuhhyroaspkuntdunvlyoaybsftp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045120.1467245-35-50048706438771/AnsiballZ_setup.py'
Feb 02 15:12:00 compute-0 sudo[110683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:00 compute-0 python3.9[110685]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:12:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:01 compute-0 sudo[110683]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:01 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 02 15:12:01 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 02 15:12:01 compute-0 sudo[110767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxapokcdutydzihbdvvfpxjilanwybwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045120.1467245-35-50048706438771/AnsiballZ_dnf.py'
Feb 02 15:12:01 compute-0 sudo[110767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:01 compute-0 python3.9[110769]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:12:01 compute-0 ceph-mon[75334]: pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:01 compute-0 ceph-mon[75334]: 5.11 scrub starts
Feb 02 15:12:01 compute-0 ceph-mon[75334]: 5.11 scrub ok
Feb 02 15:12:02 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb 02 15:12:02 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb 02 15:12:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:02 compute-0 sudo[110767]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:02 compute-0 ceph-mon[75334]: 4.7 scrub starts
Feb 02 15:12:02 compute-0 ceph-mon[75334]: 4.7 scrub ok
Feb 02 15:12:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 02 15:12:03 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 02 15:12:03 compute-0 sudo[110920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnsucaxnuzhoqkwrdqxrzhnwvpaberqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045123.0291257-47-148500089556899/AnsiballZ_setup.py'
Feb 02 15:12:03 compute-0 sudo[110920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:03 compute-0 python3.9[110922]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:12:03 compute-0 sudo[110920]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:03 compute-0 ceph-mon[75334]: pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:03 compute-0 ceph-mon[75334]: 4.5 scrub starts
Feb 02 15:12:03 compute-0 ceph-mon[75334]: 4.5 scrub ok
Feb 02 15:12:04 compute-0 sudo[111115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fojbhddsarvdvogrjfwiotilzytmwftx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045124.1908891-58-109747842940142/AnsiballZ_file.py'
Feb 02 15:12:04 compute-0 sudo[111115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:04 compute-0 python3.9[111117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:04 compute-0 sudo[111115]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb 02 15:12:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb 02 15:12:05 compute-0 sudo[111267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykufhmcrnbocloxykabmyzlulflvozkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045124.987652-66-1216834064908/AnsiballZ_command.py'
Feb 02 15:12:05 compute-0 sudo[111267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:05 compute-0 python3.9[111269]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:12:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:05 compute-0 sudo[111267]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:05 compute-0 ceph-mon[75334]: pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:06 compute-0 sudo[111432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mokddumoyqbfckxrveaqgeesdtqrrmlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045125.835641-74-42253476165152/AnsiballZ_stat.py'
Feb 02 15:12:06 compute-0 sudo[111432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:06 compute-0 python3.9[111434]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:06 compute-0 sudo[111432]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:06 compute-0 sudo[111510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yintgrutnncguvhnldeunhchxtdgjbjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045125.835641-74-42253476165152/AnsiballZ_file.py'
Feb 02 15:12:06 compute-0 sudo[111510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:06 compute-0 ceph-mon[75334]: 8.11 scrub starts
Feb 02 15:12:06 compute-0 ceph-mon[75334]: 8.11 scrub ok
Feb 02 15:12:06 compute-0 ceph-mon[75334]: pgmap v269: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:06 compute-0 python3.9[111512]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:06 compute-0 sudo[111510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:07 compute-0 sudo[111662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtyqjqniywrllqhpnvmubzwsesmjyuwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045127.1384225-86-119254324633786/AnsiballZ_stat.py'
Feb 02 15:12:07 compute-0 sudo[111662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:07 compute-0 python3.9[111664]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:07 compute-0 sudo[111662]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:07 compute-0 sudo[111740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdhtvtrytcqgdpcicanhrdhsoemqmsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045127.1384225-86-119254324633786/AnsiballZ_file.py'
Feb 02 15:12:07 compute-0 sudo[111740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:08 compute-0 python3.9[111742]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:08 compute-0 sudo[111740]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:08 compute-0 sudo[111892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagwqdrzmznkcgmncipkokymbzovihnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045128.2044587-99-110500655210712/AnsiballZ_ini_file.py'
Feb 02 15:12:08 compute-0 sudo[111892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:08 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb 02 15:12:08 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb 02 15:12:09 compute-0 python3.9[111894]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:09 compute-0 sudo[111892]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:09 compute-0 sudo[112044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfozpvrxolpvldlrvqhnzyodohxgyhze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045129.1970136-99-142663333111785/AnsiballZ_ini_file.py'
Feb 02 15:12:09 compute-0 sudo[112044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:09 compute-0 python3.9[112046]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:09 compute-0 sudo[112044]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:09 compute-0 ceph-mon[75334]: pgmap v270: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:09 compute-0 ceph-mon[75334]: 10.4 scrub starts
Feb 02 15:12:09 compute-0 ceph-mon[75334]: 10.4 scrub ok
Feb 02 15:12:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 02 15:12:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb 02 15:12:10 compute-0 sudo[112196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzmxrihimnzherbewehnryxisepyvzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045129.7492516-99-133918951210163/AnsiballZ_ini_file.py'
Feb 02 15:12:10 compute-0 sudo[112196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:10 compute-0 python3.9[112198]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:10 compute-0 sudo[112196]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:10 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 02 15:12:10 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 02 15:12:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:10 compute-0 sudo[112348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmxyiovbrnypahvtbwipnnadvuwyjdif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045130.3955913-99-244017225930877/AnsiballZ_ini_file.py'
Feb 02 15:12:10 compute-0 sudo[112348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:10 compute-0 python3.9[112350]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:10 compute-0 ceph-mon[75334]: 11.4 scrub starts
Feb 02 15:12:10 compute-0 ceph-mon[75334]: 11.4 scrub ok
Feb 02 15:12:10 compute-0 ceph-mon[75334]: 10.b scrub starts
Feb 02 15:12:10 compute-0 ceph-mon[75334]: 10.b scrub ok
Feb 02 15:12:10 compute-0 sudo[112348]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:11 compute-0 sudo[112500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsstthkuamxamdayyncjrzkbfjoiemdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045131.1243522-130-241923934693919/AnsiballZ_dnf.py'
Feb 02 15:12:11 compute-0 sudo[112500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:11 compute-0 python3.9[112502]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:12:11 compute-0 ceph-mon[75334]: pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb 02 15:12:11 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb 02 15:12:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:12 compute-0 sudo[112500]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:12 compute-0 ceph-mon[75334]: 7.9 scrub starts
Feb 02 15:12:12 compute-0 ceph-mon[75334]: 7.9 scrub ok
Feb 02 15:12:13 compute-0 sudo[112653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notzpkxxsbznlnehslhxnmnvokirzguw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045133.28862-141-218515294334997/AnsiballZ_setup.py'
Feb 02 15:12:13 compute-0 sudo[112653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:13 compute-0 python3.9[112655]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:12:13 compute-0 sudo[112653]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:13 compute-0 ceph-mon[75334]: pgmap v272: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:14 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 02 15:12:14 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 02 15:12:14 compute-0 sudo[112807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dutnmnktmqwskcohezmwirtelhhjvxiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045134.0380235-149-13368841012233/AnsiballZ_stat.py'
Feb 02 15:12:14 compute-0 sudo[112807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:14 compute-0 python3.9[112809]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:12:14 compute-0 sudo[112807]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:14 compute-0 ceph-mon[75334]: pgmap v273: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:14 compute-0 sudo[112959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istajcijdcpddghnvflqwlhcxnvampjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045134.6822846-158-197355602745547/AnsiballZ_stat.py'
Feb 02 15:12:14 compute-0 sudo[112959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:15 compute-0 python3.9[112961]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:12:15 compute-0 sudo[112959]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:15 compute-0 sudo[113111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbndbueifcaypnspolvoxajnzhfoyvrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045135.4005935-168-69882786069569/AnsiballZ_command.py'
Feb 02 15:12:15 compute-0 sudo[113111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:15 compute-0 python3.9[113113]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:12:15 compute-0 sudo[113111]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:15 compute-0 ceph-mon[75334]: 11.12 scrub starts
Feb 02 15:12:15 compute-0 ceph-mon[75334]: 11.12 scrub ok
Feb 02 15:12:16 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb 02 15:12:16 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb 02 15:12:16 compute-0 sudo[113264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuciibxuybtgpdkdipoahzlekiexspbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045136.0313437-178-148906637507511/AnsiballZ_service_facts.py'
Feb 02 15:12:16 compute-0 sudo[113264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:16 compute-0 python3.9[113266]: ansible-service_facts Invoked
Feb 02 15:12:16 compute-0 network[113283]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:12:16 compute-0 network[113284]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:12:16 compute-0 network[113285]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:12:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:16 compute-0 ceph-mon[75334]: pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb 02 15:12:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb 02 15:12:17 compute-0 ceph-mon[75334]: 3.18 scrub starts
Feb 02 15:12:17 compute-0 ceph-mon[75334]: 3.18 scrub ok
Feb 02 15:12:17 compute-0 ceph-mon[75334]: 10.2 scrub starts
Feb 02 15:12:17 compute-0 ceph-mon[75334]: 10.2 scrub ok
Feb 02 15:12:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:18 compute-0 ceph-mon[75334]: pgmap v275: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:18 compute-0 sudo[113264]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb 02 15:12:19 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb 02 15:12:19 compute-0 ceph-mon[75334]: 10.13 scrub starts
Feb 02 15:12:19 compute-0 ceph-mon[75334]: 10.13 scrub ok
Feb 02 15:12:20 compute-0 sudo[113568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqmxmgnsivcefdjapzmoqiqhaopprzu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1770045139.7570662-193-225328625215758/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1770045139.7570662-193-225328625215758/args'
Feb 02 15:12:20 compute-0 sudo[113568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:20 compute-0 sudo[113568]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:20 compute-0 sudo[113735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxcumznhoegzhwpqeswzqvzvcjfohujn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045140.5068114-204-258240677568227/AnsiballZ_dnf.py'
Feb 02 15:12:20 compute-0 sudo[113735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 02 15:12:20 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 02 15:12:20 compute-0 ceph-mon[75334]: pgmap v276: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:21 compute-0 python3.9[113737]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:12:22 compute-0 ceph-mon[75334]: 7.4 scrub starts
Feb 02 15:12:22 compute-0 ceph-mon[75334]: 7.4 scrub ok
Feb 02 15:12:22 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 02 15:12:22 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 02 15:12:22 compute-0 sudo[113735]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:23 compute-0 ceph-mon[75334]: 8.12 scrub starts
Feb 02 15:12:23 compute-0 ceph-mon[75334]: 8.12 scrub ok
Feb 02 15:12:23 compute-0 ceph-mon[75334]: pgmap v277: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:23 compute-0 sudo[113888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqiwvnbwblrjbcjfxclcvuilncpsyezb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045142.5866978-217-174166109892843/AnsiballZ_package_facts.py'
Feb 02 15:12:23 compute-0 sudo[113888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:23 compute-0 python3.9[113890]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 02 15:12:23 compute-0 sudo[113888]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:24 compute-0 sudo[114040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicriraskctsbfwhxhtxmyypbiyuzodr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045144.2317147-227-86163400952744/AnsiballZ_stat.py'
Feb 02 15:12:24 compute-0 sudo[114040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:24 compute-0 python3.9[114042]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:24 compute-0 sudo[114040]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:24 compute-0 sudo[114118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnikwsldhearepvclmlfczqeiirfadbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045144.2317147-227-86163400952744/AnsiballZ_file.py'
Feb 02 15:12:24 compute-0 sudo[114118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:25 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb 02 15:12:25 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb 02 15:12:25 compute-0 python3.9[114120]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:25 compute-0 sudo[114118]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:25 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 02 15:12:25 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 02 15:12:25 compute-0 sudo[114270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxkehszjhixwoaredhxssagzbkqvkbwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045145.3940732-239-195966541566286/AnsiballZ_stat.py'
Feb 02 15:12:25 compute-0 sudo[114270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:25 compute-0 ceph-mon[75334]: pgmap v278: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:25 compute-0 ceph-mon[75334]: 7.1c scrub starts
Feb 02 15:12:25 compute-0 ceph-mon[75334]: 7.1c scrub ok
Feb 02 15:12:25 compute-0 ceph-mon[75334]: 4.d scrub starts
Feb 02 15:12:25 compute-0 ceph-mon[75334]: 4.d scrub ok
Feb 02 15:12:25 compute-0 python3.9[114272]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:25 compute-0 sudo[114270]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 02 15:12:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 02 15:12:26 compute-0 sudo[114348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdmemgtbczngxhkghazvnuwcrzxvdnaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045145.3940732-239-195966541566286/AnsiballZ_file.py'
Feb 02 15:12:26 compute-0 sudo[114348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:26 compute-0 python3.9[114350]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:26 compute-0 sudo[114348]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:26 compute-0 ceph-mon[75334]: 7.f scrub starts
Feb 02 15:12:26 compute-0 ceph-mon[75334]: 7.f scrub ok
Feb 02 15:12:27 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb 02 15:12:27 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb 02 15:12:27 compute-0 sudo[114500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deopcvlflzxaoupiolprmudgwhaqidir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045146.8615344-257-84156047324580/AnsiballZ_lineinfile.py'
Feb 02 15:12:27 compute-0 sudo[114500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:27 compute-0 python3.9[114502]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:27 compute-0 sudo[114500]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:27 compute-0 ceph-mon[75334]: pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:27 compute-0 ceph-mon[75334]: 11.11 scrub starts
Feb 02 15:12:27 compute-0 ceph-mon[75334]: 11.11 scrub ok
Feb 02 15:12:27 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 02 15:12:28 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 02 15:12:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb 02 15:12:28 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb 02 15:12:28 compute-0 sudo[114652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsotzrfpsclkuhnggzbdnzulyxmflbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045148.0843236-272-63592606366683/AnsiballZ_setup.py'
Feb 02 15:12:28 compute-0 sudo[114652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb 02 15:12:28 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb 02 15:12:28 compute-0 python3.9[114654]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:12:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:28 compute-0 sudo[114652]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 8.1b scrub starts
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 8.1b scrub ok
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 10.1 scrub starts
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 10.1 scrub ok
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 5.c scrub starts
Feb 02 15:12:28 compute-0 ceph-mon[75334]: 5.c scrub ok
Feb 02 15:12:28 compute-0 ceph-mon[75334]: pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:29 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 02 15:12:29 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 02 15:12:29 compute-0 sudo[114736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrowulsnazcsjpwjrpibladtvunekdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045148.0843236-272-63592606366683/AnsiballZ_systemd.py'
Feb 02 15:12:29 compute-0 sudo[114736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:29 compute-0 python3.9[114738]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:12:29 compute-0 sudo[114736]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:29 compute-0 ceph-mon[75334]: 3.c scrub starts
Feb 02 15:12:29 compute-0 ceph-mon[75334]: 3.c scrub ok
Feb 02 15:12:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 02 15:12:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 02 15:12:30 compute-0 sshd-session[110225]: Connection closed by 192.168.122.30 port 55560
Feb 02 15:12:30 compute-0 sshd-session[110222]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:12:30 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Feb 02 15:12:30 compute-0 systemd[1]: session-38.scope: Consumed 21.557s CPU time.
Feb 02 15:12:30 compute-0 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Feb 02 15:12:30 compute-0 systemd-logind[786]: Removed session 38.
Feb 02 15:12:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:30 compute-0 ceph-mon[75334]: 5.f scrub starts
Feb 02 15:12:30 compute-0 ceph-mon[75334]: 5.f scrub ok
Feb 02 15:12:30 compute-0 ceph-mon[75334]: pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 02 15:12:31 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 02 15:12:31 compute-0 ceph-mon[75334]: 11.1a scrub starts
Feb 02 15:12:31 compute-0 ceph-mon[75334]: 11.1a scrub ok
Feb 02 15:12:32 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb 02 15:12:32 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb 02 15:12:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:32 compute-0 ceph-mon[75334]: 4.4 scrub starts
Feb 02 15:12:32 compute-0 ceph-mon[75334]: 4.4 scrub ok
Feb 02 15:12:32 compute-0 ceph-mon[75334]: pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 02 15:12:33 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 02 15:12:33 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb 02 15:12:33 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb 02 15:12:33 compute-0 ceph-mon[75334]: 4.1a scrub starts
Feb 02 15:12:33 compute-0 ceph-mon[75334]: 4.1a scrub ok
Feb 02 15:12:33 compute-0 ceph-mon[75334]: 4.2 scrub starts
Feb 02 15:12:33 compute-0 ceph-mon[75334]: 4.2 scrub ok
Feb 02 15:12:34 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb 02 15:12:34 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb 02 15:12:34 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 02 15:12:34 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 02 15:12:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:34 compute-0 ceph-mon[75334]: 10.15 scrub starts
Feb 02 15:12:34 compute-0 ceph-mon[75334]: 10.15 scrub ok
Feb 02 15:12:34 compute-0 ceph-mon[75334]: 10.12 scrub starts
Feb 02 15:12:34 compute-0 ceph-mon[75334]: 10.12 scrub ok
Feb 02 15:12:34 compute-0 ceph-mon[75334]: pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:36 compute-0 sshd-session[114765]: Accepted publickey for zuul from 192.168.122.30 port 41852 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:12:36 compute-0 systemd-logind[786]: New session 39 of user zuul.
Feb 02 15:12:36 compute-0 systemd[1]: Started Session 39 of User zuul.
Feb 02 15:12:36 compute-0 sshd-session[114765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:12:36 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb 02 15:12:36 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb 02 15:12:36 compute-0 ceph-mon[75334]: 10.14 scrub starts
Feb 02 15:12:36 compute-0 ceph-mon[75334]: 10.14 scrub ok
Feb 02 15:12:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:36 compute-0 sudo[114918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzlwpiupzucmhzilzlzmragpzzbjrcej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045156.396514-17-259770424724806/AnsiballZ_file.py'
Feb 02 15:12:36 compute-0 sudo[114918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 02 15:12:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 02 15:12:37 compute-0 python3.9[114920]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:37 compute-0 sudo[114918]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:37 compute-0 systemd[76706]: Created slice User Background Tasks Slice.
Feb 02 15:12:37 compute-0 systemd[76706]: Starting Cleanup of User's Temporary Files and Directories...
Feb 02 15:12:37 compute-0 systemd[76706]: Finished Cleanup of User's Temporary Files and Directories.
Feb 02 15:12:37 compute-0 ceph-mon[75334]: pgmap v284: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:37 compute-0 sudo[115071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccryydvfuaclxwwuswyeqpcainjhvfcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045157.2542686-29-98593157082758/AnsiballZ_stat.py'
Feb 02 15:12:37 compute-0 sudo[115071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:37 compute-0 python3.9[115073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:37 compute-0 sudo[115071]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb 02 15:12:38 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb 02 15:12:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb 02 15:12:38 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb 02 15:12:38 compute-0 sudo[115149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczrpoqygrduvhsbrxftppsldfpgrakq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045157.2542686-29-98593157082758/AnsiballZ_file.py'
Feb 02 15:12:38 compute-0 sudo[115149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:38 compute-0 python3.9[115151]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:38 compute-0 sudo[115149]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:38 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb 02 15:12:38 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 10.d scrub starts
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 10.d scrub ok
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 8.d scrub starts
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 8.d scrub ok
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 6.b scrub starts
Feb 02 15:12:38 compute-0 ceph-mon[75334]: 6.b scrub ok
Feb 02 15:12:38 compute-0 sshd-session[114768]: Connection closed by 192.168.122.30 port 41852
Feb 02 15:12:38 compute-0 sshd-session[114765]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:12:38 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Feb 02 15:12:38 compute-0 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Feb 02 15:12:38 compute-0 systemd[1]: session-39.scope: Consumed 1.380s CPU time.
Feb 02 15:12:38 compute-0 systemd-logind[786]: Removed session 39.
Feb 02 15:12:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:39 compute-0 sudo[115177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:12:39 compute-0 sudo[115177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:39 compute-0 sudo[115177]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:39 compute-0 sudo[115202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:12:39 compute-0 sudo[115202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:39 compute-0 ceph-mon[75334]: 8.f scrub starts
Feb 02 15:12:39 compute-0 ceph-mon[75334]: 8.f scrub ok
Feb 02 15:12:39 compute-0 ceph-mon[75334]: pgmap v285: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb 02 15:12:40 compute-0 sudo[115202]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:12:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:12:40 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 02 15:12:40 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 02 15:12:40 compute-0 sudo[115259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:12:40 compute-0 sudo[115259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:40 compute-0 sudo[115259]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:40 compute-0 sudo[115284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:12:40 compute-0 sudo[115284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:40 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 02 15:12:40 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.492479928 +0000 UTC m=+0.041917860 container create 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:12:40 compute-0 systemd[1]: Started libpod-conmon-99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de.scope.
Feb 02 15:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.558504089 +0000 UTC m=+0.107942051 container init 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.564903647 +0000 UTC m=+0.114341579 container start 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.568376251 +0000 UTC m=+0.117814243 container attach 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.473498402 +0000 UTC m=+0.022936344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:40 compute-0 mystifying_bell[115339]: 167 167
Feb 02 15:12:40 compute-0 systemd[1]: libpod-99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de.scope: Deactivated successfully.
Feb 02 15:12:40 compute-0 conmon[115339]: conmon 99b906106cfa0fc20998 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de.scope/container/memory.events
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.585396639 +0000 UTC m=+0.134834571 container died 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb10d1ecf254d6cc301328ba1a731b48c87193848dd469588aa6e25ee951196-merged.mount: Deactivated successfully.
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:12:40 compute-0 ceph-mon[75334]: 6.8 scrub starts
Feb 02 15:12:40 compute-0 ceph-mon[75334]: 6.8 scrub ok
Feb 02 15:12:40 compute-0 ceph-mon[75334]: 6.d scrub starts
Feb 02 15:12:40 compute-0 ceph-mon[75334]: 6.d scrub ok
Feb 02 15:12:40 compute-0 podman[115322]: 2026-02-02 15:12:40.630372043 +0000 UTC m=+0.179809935 container remove 99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:12:40 compute-0 systemd[1]: libpod-conmon-99b906106cfa0fc2099839acc3644539bf5b3231ffb2b828d21505b5d648a9de.scope: Deactivated successfully.
Feb 02 15:12:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:40 compute-0 podman[115366]: 2026-02-02 15:12:40.79925268 +0000 UTC m=+0.040962887 container create c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:12:40 compute-0 systemd[1]: Started libpod-conmon-c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d.scope.
Feb 02 15:12:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:40 compute-0 podman[115366]: 2026-02-02 15:12:40.782510949 +0000 UTC m=+0.024221156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:40 compute-0 podman[115366]: 2026-02-02 15:12:40.902341241 +0000 UTC m=+0.144051438 container init c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:12:40 compute-0 podman[115366]: 2026-02-02 15:12:40.919214865 +0000 UTC m=+0.160925032 container start c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:12:40 compute-0 podman[115366]: 2026-02-02 15:12:40.92308042 +0000 UTC m=+0.164790617 container attach c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:12:41 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 02 15:12:41 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 02 15:12:41 compute-0 brave_turing[115382]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:12:41 compute-0 brave_turing[115382]: --> All data devices are unavailable
Feb 02 15:12:41 compute-0 systemd[1]: libpod-c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d.scope: Deactivated successfully.
Feb 02 15:12:41 compute-0 podman[115366]: 2026-02-02 15:12:41.367280286 +0000 UTC m=+0.608990543 container died c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f5a931eea55a023ffbf3e7fffd514cd1512dcab69cadeb0f8ebb192420e801-merged.mount: Deactivated successfully.
Feb 02 15:12:41 compute-0 podman[115366]: 2026-02-02 15:12:41.410185689 +0000 UTC m=+0.651895876 container remove c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_turing, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:12:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 02 15:12:41 compute-0 systemd[1]: libpod-conmon-c863de36263de76dcf56be2db773b3af2376371d33fd3ee0e5052fdeaac3e33d.scope: Deactivated successfully.
Feb 02 15:12:41 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 02 15:12:41 compute-0 sudo[115284]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:41 compute-0 sudo[115414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:12:41 compute-0 sudo[115414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:41 compute-0 sudo[115414]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:41 compute-0 sudo[115439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:12:41 compute-0 sudo[115439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:41 compute-0 ceph-mon[75334]: 10.e scrub starts
Feb 02 15:12:41 compute-0 ceph-mon[75334]: 10.e scrub ok
Feb 02 15:12:41 compute-0 ceph-mon[75334]: pgmap v286: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:41 compute-0 ceph-mon[75334]: 6.2 scrub starts
Feb 02 15:12:41 compute-0 ceph-mon[75334]: 6.2 scrub ok
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.821324453 +0000 UTC m=+0.031739940 container create 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:12:41 compute-0 systemd[1]: Started libpod-conmon-62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de.scope.
Feb 02 15:12:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.807074074 +0000 UTC m=+0.017489571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.910346668 +0000 UTC m=+0.120762165 container init 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.918420647 +0000 UTC m=+0.128836124 container start 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.920964029 +0000 UTC m=+0.131379536 container attach 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:12:41 compute-0 naughty_swartz[115492]: 167 167
Feb 02 15:12:41 compute-0 systemd[1]: libpod-62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de.scope: Deactivated successfully.
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.92384662 +0000 UTC m=+0.134262107 container died 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c950ff9680ccb977043cad3cfac0d1bbb67dfb02be5f0344066e09068b364745-merged.mount: Deactivated successfully.
Feb 02 15:12:41 compute-0 podman[115476]: 2026-02-02 15:12:41.963647108 +0000 UTC m=+0.174062595 container remove 62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:12:41 compute-0 systemd[1]: libpod-conmon-62376f8104655030a969e411adc6448396648cbaf6d57ee2532871ec7df386de.scope: Deactivated successfully.
Feb 02 15:12:42 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb 02 15:12:42 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.107296874 +0000 UTC m=+0.036090947 container create 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:12:42 compute-0 systemd[1]: Started libpod-conmon-79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92.scope.
Feb 02 15:12:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7ea60aedf61c7dabe36b2603bd0777df4f1fc16560e6b4f3161fcbd0d5726c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7ea60aedf61c7dabe36b2603bd0777df4f1fc16560e6b4f3161fcbd0d5726c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7ea60aedf61c7dabe36b2603bd0777df4f1fc16560e6b4f3161fcbd0d5726c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7ea60aedf61c7dabe36b2603bd0777df4f1fc16560e6b4f3161fcbd0d5726c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.092557023 +0000 UTC m=+0.021351106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.198184196 +0000 UTC m=+0.126978289 container init 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.203364463 +0000 UTC m=+0.132158546 container start 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.207332741 +0000 UTC m=+0.136126824 container attach 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:12:42 compute-0 amazing_moser[115530]: {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     "0": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "devices": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "/dev/loop3"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             ],
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_name": "ceph_lv0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_size": "21470642176",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "name": "ceph_lv0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "tags": {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_name": "ceph",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.crush_device_class": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.encrypted": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.objectstore": "bluestore",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_id": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.vdo": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.with_tpm": "0"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             },
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "vg_name": "ceph_vg0"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         }
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     ],
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     "1": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "devices": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "/dev/loop4"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             ],
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_name": "ceph_lv1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_size": "21470642176",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "name": "ceph_lv1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "tags": {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_name": "ceph",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.crush_device_class": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.encrypted": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.objectstore": "bluestore",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_id": "1",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.vdo": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.with_tpm": "0"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             },
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "vg_name": "ceph_vg1"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         }
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     ],
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     "2": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "devices": [
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "/dev/loop5"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             ],
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_name": "ceph_lv2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_size": "21470642176",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "name": "ceph_lv2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "tags": {
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.cluster_name": "ceph",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.crush_device_class": "",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.encrypted": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.objectstore": "bluestore",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osd_id": "2",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.vdo": "0",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:                 "ceph.with_tpm": "0"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             },
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "type": "block",
Feb 02 15:12:42 compute-0 amazing_moser[115530]:             "vg_name": "ceph_vg2"
Feb 02 15:12:42 compute-0 amazing_moser[115530]:         }
Feb 02 15:12:42 compute-0 amazing_moser[115530]:     ]
Feb 02 15:12:42 compute-0 amazing_moser[115530]: }
Feb 02 15:12:42 compute-0 systemd[1]: libpod-79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92.scope: Deactivated successfully.
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.485854839 +0000 UTC m=+0.414648922 container died 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed7ea60aedf61c7dabe36b2603bd0777df4f1fc16560e6b4f3161fcbd0d5726c-merged.mount: Deactivated successfully.
Feb 02 15:12:42 compute-0 podman[115514]: 2026-02-02 15:12:42.586677094 +0000 UTC m=+0.515471177 container remove 79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:12:42 compute-0 systemd[1]: libpod-conmon-79168c24e0e1abe547c92ee481a1185454b546351ed2f806aa6011a6d8b3fe92.scope: Deactivated successfully.
Feb 02 15:12:42 compute-0 sudo[115439]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:42 compute-0 ceph-mon[75334]: 10.9 scrub starts
Feb 02 15:12:42 compute-0 ceph-mon[75334]: 10.9 scrub ok
Feb 02 15:12:42 compute-0 sudo[115553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:12:42 compute-0 sudo[115553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:42 compute-0 sudo[115553]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:12:42
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes']
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:12:42 compute-0 sudo[115578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:12:42 compute-0 sudo[115578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.081633886 +0000 UTC m=+0.053375252 container create 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:12:43 compute-0 systemd[1]: Started libpod-conmon-16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc.scope.
Feb 02 15:12:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.14535347 +0000 UTC m=+0.117094876 container init 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.060073377 +0000 UTC m=+0.031814793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.152429404 +0000 UTC m=+0.124170770 container start 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.15635273 +0000 UTC m=+0.128094106 container attach 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:12:43 compute-0 blissful_wilbur[115630]: 167 167
Feb 02 15:12:43 compute-0 systemd[1]: libpod-16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc.scope: Deactivated successfully.
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.158194046 +0000 UTC m=+0.129935442 container died 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-970a642378d021d1798c2a549aba8ae09dda774c59d52d389f9d27b24c98a923-merged.mount: Deactivated successfully.
Feb 02 15:12:43 compute-0 podman[115614]: 2026-02-02 15:12:43.198979597 +0000 UTC m=+0.170720953 container remove 16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:12:43 compute-0 systemd[1]: libpod-conmon-16ffe54a4733d085113cccf3ac0d6355f6aeda26085cbe2070cb89104a5169bc.scope: Deactivated successfully.
Feb 02 15:12:43 compute-0 podman[115654]: 2026-02-02 15:12:43.64024159 +0000 UTC m=+0.039028529 container create 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:12:43 compute-0 ceph-mon[75334]: 8.6 scrub starts
Feb 02 15:12:43 compute-0 ceph-mon[75334]: 8.6 scrub ok
Feb 02 15:12:43 compute-0 ceph-mon[75334]: pgmap v287: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:43 compute-0 systemd[1]: Started libpod-conmon-1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4.scope.
Feb 02 15:12:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca27b8cc2a35bad7fed9b6caf31489a388adce9839ec61505fd4e46b36395c98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca27b8cc2a35bad7fed9b6caf31489a388adce9839ec61505fd4e46b36395c98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca27b8cc2a35bad7fed9b6caf31489a388adce9839ec61505fd4e46b36395c98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca27b8cc2a35bad7fed9b6caf31489a388adce9839ec61505fd4e46b36395c98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:12:43 compute-0 podman[115654]: 2026-02-02 15:12:43.707134693 +0000 UTC m=+0.105921682 container init 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:12:43 compute-0 podman[115654]: 2026-02-02 15:12:43.713320234 +0000 UTC m=+0.112107183 container start 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:12:43 compute-0 podman[115654]: 2026-02-02 15:12:43.716842581 +0000 UTC m=+0.115629570 container attach 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:12:43 compute-0 podman[115654]: 2026-02-02 15:12:43.623799836 +0000 UTC m=+0.022586795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:12:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb 02 15:12:44 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb 02 15:12:44 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb 02 15:12:44 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb 02 15:12:44 compute-0 lvm[115749]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:12:44 compute-0 lvm[115749]: VG ceph_vg1 finished
Feb 02 15:12:44 compute-0 lvm[115747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:12:44 compute-0 lvm[115747]: VG ceph_vg0 finished
Feb 02 15:12:44 compute-0 lvm[115751]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:12:44 compute-0 lvm[115751]: VG ceph_vg2 finished
Feb 02 15:12:44 compute-0 tender_rosalind[115670]: {}
Feb 02 15:12:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 02 15:12:44 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 02 15:12:44 compute-0 systemd[1]: libpod-1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4.scope: Deactivated successfully.
Feb 02 15:12:44 compute-0 podman[115654]: 2026-02-02 15:12:44.44465659 +0000 UTC m=+0.843443569 container died 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:12:44 compute-0 systemd[1]: libpod-1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4.scope: Consumed 1.038s CPU time.
Feb 02 15:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca27b8cc2a35bad7fed9b6caf31489a388adce9839ec61505fd4e46b36395c98-merged.mount: Deactivated successfully.
Feb 02 15:12:44 compute-0 podman[115654]: 2026-02-02 15:12:44.501875754 +0000 UTC m=+0.900662733 container remove 1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:12:44 compute-0 sshd-session[115754]: Accepted publickey for zuul from 192.168.122.30 port 44452 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:12:44 compute-0 systemd[1]: libpod-conmon-1fb8cef25566f95571c5d631872659a1a999917b7dfe109420d17f55d47a2ee4.scope: Deactivated successfully.
Feb 02 15:12:44 compute-0 systemd-logind[786]: New session 40 of user zuul.
Feb 02 15:12:44 compute-0 systemd[1]: Started Session 40 of User zuul.
Feb 02 15:12:44 compute-0 sshd-session[115754]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:12:44 compute-0 sudo[115578]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:12:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:12:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:44 compute-0 sudo[115772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:12:44 compute-0 sudo[115772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:12:44 compute-0 sudo[115772]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:44 compute-0 ceph-mon[75334]: 6.f scrub starts
Feb 02 15:12:44 compute-0 ceph-mon[75334]: 6.f scrub ok
Feb 02 15:12:44 compute-0 ceph-mon[75334]: 6.e scrub starts
Feb 02 15:12:44 compute-0 ceph-mon[75334]: 6.e scrub ok
Feb 02 15:12:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:44 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:12:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 02 15:12:45 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 02 15:12:45 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 02 15:12:45 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 02 15:12:45 compute-0 python3.9[115946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:12:45 compute-0 ceph-mon[75334]: 6.0 scrub starts
Feb 02 15:12:45 compute-0 ceph-mon[75334]: 6.0 scrub ok
Feb 02 15:12:45 compute-0 ceph-mon[75334]: pgmap v288: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:45 compute-0 ceph-mon[75334]: 9.e scrub starts
Feb 02 15:12:45 compute-0 ceph-mon[75334]: 9.e scrub ok
Feb 02 15:12:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:46 compute-0 sudo[116100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huzrzriibesyhhpvcobejykyyxhoqsky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045166.0631273-28-187509879184207/AnsiballZ_file.py'
Feb 02 15:12:46 compute-0 sudo[116100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:46 compute-0 ceph-mon[75334]: 6.7 scrub starts
Feb 02 15:12:46 compute-0 ceph-mon[75334]: 6.7 scrub ok
Feb 02 15:12:46 compute-0 python3.9[116102]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:46 compute-0 sudo[116100]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:47 compute-0 sudo[116275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuimuqcfobmauuknfoselbqslarzxjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045166.9350507-36-20177966681548/AnsiballZ_stat.py'
Feb 02 15:12:47 compute-0 sudo[116275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:47 compute-0 python3.9[116277]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:47 compute-0 sudo[116275]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:47 compute-0 ceph-mon[75334]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:47 compute-0 sudo[116353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryispqwoohccwkuhflvivdfgwbkuwfih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045166.9350507-36-20177966681548/AnsiballZ_file.py'
Feb 02 15:12:47 compute-0 sudo[116353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:48 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb 02 15:12:48 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb 02 15:12:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 02 15:12:48 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 02 15:12:48 compute-0 python3.9[116355]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.0oaelaej recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:48 compute-0 sudo[116353]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:48 compute-0 ceph-mon[75334]: 9.8 scrub starts
Feb 02 15:12:48 compute-0 ceph-mon[75334]: 9.8 scrub ok
Feb 02 15:12:48 compute-0 sudo[116505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qackhjbqftsfujmderoplndfhvmowmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045168.5130153-56-269673119467113/AnsiballZ_stat.py'
Feb 02 15:12:48 compute-0 sudo[116505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:48 compute-0 python3.9[116507]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:49 compute-0 sudo[116505]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:49 compute-0 sudo[116583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yslvddxzksdmsaugrdskhimoxbnmurga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045168.5130153-56-269673119467113/AnsiballZ_file.py'
Feb 02 15:12:49 compute-0 sudo[116583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:49 compute-0 python3.9[116585]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.nwcie3sa recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:49 compute-0 sudo[116583]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:49 compute-0 ceph-mon[75334]: 6.3 scrub starts
Feb 02 15:12:49 compute-0 ceph-mon[75334]: 6.3 scrub ok
Feb 02 15:12:49 compute-0 ceph-mon[75334]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:49 compute-0 sudo[116735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbjqyyhvdszudtzckbbebshfrxxijqmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045169.5800545-69-108348514587787/AnsiballZ_file.py'
Feb 02 15:12:49 compute-0 sudo[116735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:50 compute-0 python3.9[116737]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:50 compute-0 sudo[116735]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:50 compute-0 sudo[116887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hitvromlpwmldntxgimbmkegfyrrsgzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045170.192471-77-182802801839280/AnsiballZ_stat.py'
Feb 02 15:12:50 compute-0 sudo[116887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:50 compute-0 python3.9[116889]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:50 compute-0 sudo[116887]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:50 compute-0 sudo[116965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylgdtcixjwodanqmblyercfavtaabbem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045170.192471-77-182802801839280/AnsiballZ_file.py'
Feb 02 15:12:50 compute-0 sudo[116965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:50 compute-0 python3.9[116967]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb 02 15:12:51 compute-0 sudo[116965]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:51 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb 02 15:12:51 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 02 15:12:51 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 02 15:12:51 compute-0 sudo[117117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejexqnjdstecseodjovprwroidqzwqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045171.1361825-77-275292841308425/AnsiballZ_stat.py'
Feb 02 15:12:51 compute-0 sudo[117117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:51 compute-0 python3.9[117119]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:51 compute-0 sudo[117117]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:51 compute-0 sudo[117195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljblbhtjeosovudinyiwfergxrqzeev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045171.1361825-77-275292841308425/AnsiballZ_file.py'
Feb 02 15:12:51 compute-0 sudo[117195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:51 compute-0 ceph-mon[75334]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:51 compute-0 ceph-mon[75334]: 9.19 scrub starts
Feb 02 15:12:51 compute-0 ceph-mon[75334]: 9.19 scrub ok
Feb 02 15:12:51 compute-0 ceph-mon[75334]: 6.5 scrub starts
Feb 02 15:12:51 compute-0 ceph-mon[75334]: 6.5 scrub ok
Feb 02 15:12:51 compute-0 python3.9[117197]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:12:51 compute-0 sudo[117195]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:52 compute-0 sudo[117347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siqwtpbirxqqaapltpsitxopvsokwoai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045172.1532233-100-263859004695344/AnsiballZ_file.py'
Feb 02 15:12:52 compute-0 sudo[117347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:52 compute-0 python3.9[117349]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:52 compute-0 sudo[117347]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb 02 15:12:53 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb 02 15:12:53 compute-0 sudo[117499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irpobmjyraagknxpzumcdvvfmuaydqlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045172.8242571-108-258207694426293/AnsiballZ_stat.py'
Feb 02 15:12:53 compute-0 sudo[117499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:53 compute-0 python3.9[117501]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:53 compute-0 sudo[117499]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:53 compute-0 sudo[117577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbtmnuxmcelstrimcmvfbuohfexhavh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045172.8242571-108-258207694426293/AnsiballZ_file.py'
Feb 02 15:12:53 compute-0 sudo[117577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:53 compute-0 python3.9[117579]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:53 compute-0 sudo[117577]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:53 compute-0 ceph-mon[75334]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:53 compute-0 ceph-mon[75334]: 9.7 scrub starts
Feb 02 15:12:53 compute-0 ceph-mon[75334]: 9.7 scrub ok
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:12:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb 02 15:12:54 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb 02 15:12:54 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 02 15:12:54 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 02 15:12:54 compute-0 sudo[117729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouhnxrqftfbpicxqzwhguvxtuiectdyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045173.9241052-120-208814875437739/AnsiballZ_stat.py'
Feb 02 15:12:54 compute-0 sudo[117729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:54 compute-0 python3.9[117731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:54 compute-0 sudo[117729]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:54 compute-0 sudo[117807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apmilvomkefgqjnsjlsddjimasaeuwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045173.9241052-120-208814875437739/AnsiballZ_file.py'
Feb 02 15:12:54 compute-0 sudo[117807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:54 compute-0 python3.9[117809]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:54 compute-0 sudo[117807]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:54 compute-0 ceph-mon[75334]: 9.c scrub starts
Feb 02 15:12:54 compute-0 ceph-mon[75334]: 9.c scrub ok
Feb 02 15:12:54 compute-0 ceph-mon[75334]: 6.a scrub starts
Feb 02 15:12:54 compute-0 ceph-mon[75334]: 6.a scrub ok
Feb 02 15:12:55 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb 02 15:12:55 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb 02 15:12:55 compute-0 sudo[117959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitxlcgyaowphhfjkjutsvdbthxuyxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045174.9921618-132-161341574335813/AnsiballZ_systemd.py'
Feb 02 15:12:55 compute-0 sudo[117959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:12:55 compute-0 python3.9[117961]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:12:55 compute-0 systemd[1]: Reloading.
Feb 02 15:12:55 compute-0 ceph-mon[75334]: pgmap v293: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:55 compute-0 ceph-mon[75334]: 6.1 scrub starts
Feb 02 15:12:55 compute-0 ceph-mon[75334]: 6.1 scrub ok
Feb 02 15:12:55 compute-0 systemd-sysv-generator[117988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:12:55 compute-0 systemd-rc-local-generator[117983]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:12:56 compute-0 sudo[117959]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb 02 15:12:56 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb 02 15:12:56 compute-0 sudo[118148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggetrhiqpqwaoctigavpkhllkqzpeyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045176.3316133-140-183814959112013/AnsiballZ_stat.py'
Feb 02 15:12:56 compute-0 sudo[118148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:56 compute-0 python3.9[118150]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:56 compute-0 sudo[118148]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb 02 15:12:56 compute-0 ceph-mon[75334]: 6.9 scrub starts
Feb 02 15:12:56 compute-0 ceph-mon[75334]: 6.9 scrub ok
Feb 02 15:12:56 compute-0 ceph-mon[75334]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:56 compute-0 sudo[118226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbrszhkpbcjeozyikfqlbvdconynxfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045176.3316133-140-183814959112013/AnsiballZ_file.py'
Feb 02 15:12:56 compute-0 sudo[118226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:56 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb 02 15:12:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb 02 15:12:57 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb 02 15:12:57 compute-0 python3.9[118228]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:57 compute-0 sudo[118226]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb 02 15:12:57 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb 02 15:12:57 compute-0 sudo[118378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxwajuiwysfalwzprcxpkaqdqsmipam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045177.3723917-152-86953181776333/AnsiballZ_stat.py'
Feb 02 15:12:57 compute-0 sudo[118378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:57 compute-0 python3.9[118380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:12:57 compute-0 sudo[118378]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 9.f scrub starts
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 9.f scrub ok
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 9.11 scrub starts
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 9.11 scrub ok
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 6.4 scrub starts
Feb 02 15:12:57 compute-0 ceph-mon[75334]: 6.4 scrub ok
Feb 02 15:12:57 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb 02 15:12:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:57.970777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:12:57 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb 02 15:12:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045177970850, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7218, "num_deletes": 251, "total_data_size": 9749839, "memory_usage": 9941896, "flush_reason": "Manual Compaction"}
Feb 02 15:12:57 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178002277, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7711853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7361, "table_properties": {"data_size": 7685290, "index_size": 17296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 76144, "raw_average_key_size": 23, "raw_value_size": 7622472, "raw_average_value_size": 2328, "num_data_blocks": 760, "num_entries": 3274, "num_filter_entries": 3274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044785, "oldest_key_time": 1770044785, "file_creation_time": 1770045177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 31551 microseconds, and 10093 cpu microseconds.
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.002332) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7711853 bytes OK
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.002355) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.005300) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.005325) EVENT_LOG_v1 {"time_micros": 1770045178005318, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.005358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9718370, prev total WAL file size 9718370, number of live WAL files 2.
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.007559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7531KB) 13(58KB) 8(1944B)]
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178007767, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7773757, "oldest_snapshot_seqno": -1}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3100 keys, 7726721 bytes, temperature: kUnknown
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178049773, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7726721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7700477, "index_size": 17366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74584, "raw_average_key_size": 24, "raw_value_size": 7638992, "raw_average_value_size": 2464, "num_data_blocks": 764, "num_entries": 3100, "num_filter_entries": 3100, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.050006) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7726721 bytes
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.051464) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.7 rd, 183.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3389, records dropped: 289 output_compression: NoCompression
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.051485) EVENT_LOG_v1 {"time_micros": 1770045178051474, "job": 4, "event": "compaction_finished", "compaction_time_micros": 42083, "compaction_time_cpu_micros": 23110, "output_level": 6, "num_output_files": 1, "total_output_size": 7726721, "num_input_records": 3389, "num_output_records": 3100, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178052835, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178052887, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045178052918, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 02 15:12:58 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:12:58.007394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:12:58 compute-0 sudo[118457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nakpmielxiqsgrbetplisfalpbuvasjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045177.3723917-152-86953181776333/AnsiballZ_file.py'
Feb 02 15:12:58 compute-0 sudo[118457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:58 compute-0 python3.9[118459]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:12:58 compute-0 sudo[118457]: pam_unix(sudo:session): session closed for user root
Feb 02 15:12:58 compute-0 sudo[118609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxvphdspiburxlpmmgofsnagzlhpaioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045178.4059813-164-246491121843424/AnsiballZ_systemd.py'
Feb 02 15:12:58 compute-0 sudo[118609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:12:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:58 compute-0 python3.9[118611]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:12:58 compute-0 ceph-mon[75334]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:12:58 compute-0 systemd[1]: Reloading.
Feb 02 15:12:59 compute-0 systemd-rc-local-generator[118634]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:12:59 compute-0 systemd-sysv-generator[118640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:12:59 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 15:12:59 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 15:12:59 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 15:12:59 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 15:12:59 compute-0 sudo[118609]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:00 compute-0 python3.9[118803]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:13:00 compute-0 network[118820]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:13:00 compute-0 network[118821]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:13:00 compute-0 network[118822]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:13:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:00 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 02 15:13:00 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 02 15:13:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb 02 15:13:01 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb 02 15:13:01 compute-0 ceph-mon[75334]: pgmap v296: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:01 compute-0 ceph-mon[75334]: 9.6 scrub starts
Feb 02 15:13:01 compute-0 ceph-mon[75334]: 9.6 scrub ok
Feb 02 15:13:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:02 compute-0 ceph-mon[75334]: 9.18 scrub starts
Feb 02 15:13:02 compute-0 ceph-mon[75334]: 9.18 scrub ok
Feb 02 15:13:03 compute-0 sudo[119082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpthxpjxytroyjdjsnoayhzwditvyuaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045182.8640788-190-263467396000579/AnsiballZ_stat.py'
Feb 02 15:13:03 compute-0 sudo[119082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:03 compute-0 python3.9[119084]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:03 compute-0 sudo[119082]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:03 compute-0 sudo[119160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnyumbxsmbdlbgevgjubknhxhkxnrxjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045182.8640788-190-263467396000579/AnsiballZ_file.py'
Feb 02 15:13:03 compute-0 sudo[119160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:03 compute-0 python3.9[119162]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:03 compute-0 sudo[119160]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:03 compute-0 ceph-mon[75334]: pgmap v297: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb 02 15:13:04 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb 02 15:13:04 compute-0 sudo[119312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjqgozgntkvlzcdqczvigwbgrpdoskq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045183.9324574-203-254905549413781/AnsiballZ_file.py'
Feb 02 15:13:04 compute-0 sudo[119312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:04 compute-0 python3.9[119314]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:04 compute-0 sudo[119312]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:04 compute-0 sudo[119464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izedgsdxquikvklybwycthruaixeufcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045184.586558-211-248526893281068/AnsiballZ_stat.py'
Feb 02 15:13:04 compute-0 sudo[119464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:04 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb 02 15:13:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:04 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb 02 15:13:04 compute-0 ceph-mon[75334]: 9.1b scrub starts
Feb 02 15:13:04 compute-0 ceph-mon[75334]: 9.1b scrub ok
Feb 02 15:13:04 compute-0 ceph-mon[75334]: pgmap v298: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:05 compute-0 python3.9[119466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:05 compute-0 sudo[119464]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:05 compute-0 sudo[119542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iavisrzeljpcolmvpbahwjrrzaueeztt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045184.586558-211-248526893281068/AnsiballZ_file.py'
Feb 02 15:13:05 compute-0 sudo[119542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb 02 15:13:05 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb 02 15:13:05 compute-0 python3.9[119544]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:05 compute-0 sudo[119542]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb 02 15:13:05 compute-0 ceph-osd[88227]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb 02 15:13:05 compute-0 ceph-mon[75334]: 9.17 scrub starts
Feb 02 15:13:05 compute-0 ceph-mon[75334]: 9.17 scrub ok
Feb 02 15:13:05 compute-0 ceph-mon[75334]: 6.6 scrub starts
Feb 02 15:13:05 compute-0 ceph-mon[75334]: 6.6 scrub ok
Feb 02 15:13:06 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 02 15:13:06 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 02 15:13:06 compute-0 sudo[119694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrpmbeanfmhrwnlwknqvlozhvyspogf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045185.7637055-226-254810284335333/AnsiballZ_timezone.py'
Feb 02 15:13:06 compute-0 sudo[119694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:06 compute-0 python3.9[119696]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 02 15:13:06 compute-0 systemd[1]: Starting Time & Date Service...
Feb 02 15:13:06 compute-0 systemd[1]: Started Time & Date Service.
Feb 02 15:13:06 compute-0 sudo[119694]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:06 compute-0 ceph-mon[75334]: 9.13 scrub starts
Feb 02 15:13:06 compute-0 ceph-mon[75334]: 9.13 scrub ok
Feb 02 15:13:06 compute-0 ceph-mon[75334]: 9.1d scrub starts
Feb 02 15:13:06 compute-0 ceph-mon[75334]: 9.1d scrub ok
Feb 02 15:13:06 compute-0 ceph-mon[75334]: pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:07 compute-0 sudo[119850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibgiljjwseimlzpawkwlkbtbftrzkubi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045186.8013651-235-36984072697049/AnsiballZ_file.py'
Feb 02 15:13:07 compute-0 sudo[119850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:07 compute-0 python3.9[119852]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:07 compute-0 sudo[119850]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:07 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb 02 15:13:07 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb 02 15:13:07 compute-0 sudo[120002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivreakheacawpvcgzwlrifbnhgqvake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045187.4383585-243-153662142780684/AnsiballZ_stat.py'
Feb 02 15:13:07 compute-0 sudo[120002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:07 compute-0 python3.9[120004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:07 compute-0 sudo[120002]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:07 compute-0 ceph-mon[75334]: 6.c scrub starts
Feb 02 15:13:07 compute-0 ceph-mon[75334]: 6.c scrub ok
Feb 02 15:13:08 compute-0 sudo[120080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gechzluonzvgnaostgsxfhidrznykwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045187.4383585-243-153662142780684/AnsiballZ_file.py'
Feb 02 15:13:08 compute-0 sudo[120080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:08 compute-0 python3.9[120082]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:08 compute-0 sudo[120080]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:08 compute-0 sudo[120232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnexbuzjtcnczfnpyfhsoislhkuomas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045188.4946344-255-264687158826843/AnsiballZ_stat.py'
Feb 02 15:13:08 compute-0 sudo[120232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:08 compute-0 python3.9[120234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:08 compute-0 ceph-mon[75334]: pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:08 compute-0 sudo[120232]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb 02 15:13:09 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb 02 15:13:09 compute-0 sudo[120310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtfoeunaaozzfpxznmghtsccvxcufgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045188.4946344-255-264687158826843/AnsiballZ_file.py'
Feb 02 15:13:09 compute-0 sudo[120310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:09 compute-0 python3.9[120312]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7l8kkecn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:09 compute-0 sudo[120310]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:09 compute-0 sudo[120462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuwuzheibluksgweoqyxsjdiqgmpocjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045189.5940094-267-151303357380090/AnsiballZ_stat.py'
Feb 02 15:13:09 compute-0 sudo[120462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:10 compute-0 ceph-mon[75334]: 9.9 scrub starts
Feb 02 15:13:10 compute-0 ceph-mon[75334]: 9.9 scrub ok
Feb 02 15:13:10 compute-0 python3.9[120464]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:10 compute-0 sudo[120462]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:10 compute-0 sudo[120540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuphlseyfouwjhdvqipuuclpjqlqpdoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045189.5940094-267-151303357380090/AnsiballZ_file.py'
Feb 02 15:13:10 compute-0 sudo[120540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:10 compute-0 python3.9[120542]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:10 compute-0 sudo[120540]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:11 compute-0 ceph-mon[75334]: pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:11 compute-0 sudo[120692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvsmgpqmdtbfftgfbkvxkjeryvahgtxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045190.7650025-280-261011678546481/AnsiballZ_command.py'
Feb 02 15:13:11 compute-0 sudo[120692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:11 compute-0 python3.9[120694]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:13:11 compute-0 sudo[120692]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:12 compute-0 sudo[120845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpzuisugqhmxpbjxyvafcjnllazumemx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045191.6394656-288-112207934291939/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 15:13:12 compute-0 sudo[120845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:12 compute-0 python3[120847]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 15:13:12 compute-0 sudo[120845]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb 02 15:13:12 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb 02 15:13:12 compute-0 ceph-mon[75334]: 9.15 scrub starts
Feb 02 15:13:12 compute-0 ceph-mon[75334]: 9.15 scrub ok
Feb 02 15:13:12 compute-0 sudo[120997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfztpqoyjliljgvdtavjsjhxvblyubn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045192.4150712-296-267624334528683/AnsiballZ_stat.py'
Feb 02 15:13:12 compute-0 sudo[120997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:12 compute-0 python3.9[120999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:12 compute-0 sudo[120997]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:13 compute-0 sudo[121075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywhjbjtymrnvbhppedowhxdrpgwowwyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045192.4150712-296-267624334528683/AnsiballZ_file.py'
Feb 02 15:13:13 compute-0 sudo[121075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:13 compute-0 python3.9[121077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:13 compute-0 sudo[121075]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb 02 15:13:13 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb 02 15:13:13 compute-0 ceph-mon[75334]: pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:13 compute-0 ceph-mon[75334]: 9.12 scrub starts
Feb 02 15:13:13 compute-0 ceph-mon[75334]: 9.12 scrub ok
Feb 02 15:13:13 compute-0 sudo[121227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzzrjqrcmyxrmazwscugfstrmjbwnedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045193.4819787-308-148838393748110/AnsiballZ_stat.py'
Feb 02 15:13:13 compute-0 sudo[121227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:14 compute-0 python3.9[121229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:14 compute-0 sudo[121227]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:14 compute-0 sudo[121352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jplthfortavfhtiuhqjsbyeroufioxwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045193.4819787-308-148838393748110/AnsiballZ_copy.py'
Feb 02 15:13:14 compute-0 sudo[121352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:14 compute-0 python3.9[121354]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045193.4819787-308-148838393748110/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:14 compute-0 sudo[121352]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:15 compute-0 sudo[121504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpukljbbdgcffegehutaiegbeifpvcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045194.998117-323-173605747774988/AnsiballZ_stat.py'
Feb 02 15:13:15 compute-0 sudo[121504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:15 compute-0 python3.9[121506]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:15 compute-0 sudo[121504]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:15 compute-0 sudo[121582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtrlsvrbrwknarldxdinoqwmdbbhkqls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045194.998117-323-173605747774988/AnsiballZ_file.py'
Feb 02 15:13:15 compute-0 sudo[121582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:15 compute-0 python3.9[121584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:15 compute-0 sudo[121582]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:15 compute-0 ceph-mon[75334]: pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:16 compute-0 sudo[121734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofomiatfhtfobungrsqyvbvlquihvgqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045196.0005043-335-197799542387699/AnsiballZ_stat.py'
Feb 02 15:13:16 compute-0 sudo[121734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:16 compute-0 python3.9[121736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:16 compute-0 sudo[121734]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:16 compute-0 sudo[121812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrqiiehbtmvdckmcagpkmwkzfnlzrdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045196.0005043-335-197799542387699/AnsiballZ_file.py'
Feb 02 15:13:16 compute-0 sudo[121812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:16 compute-0 python3.9[121814]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:17 compute-0 sudo[121812]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:17 compute-0 ceph-mon[75334]: pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:17 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb 02 15:13:17 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb 02 15:13:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 02 15:13:17 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 02 15:13:17 compute-0 sudo[121964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjxvchzfxilwfhjnjlzscmtivhmtxkpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045197.1788213-347-199820935819973/AnsiballZ_stat.py'
Feb 02 15:13:17 compute-0 sudo[121964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:17 compute-0 python3.9[121966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:17 compute-0 sudo[121964]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:17 compute-0 sudo[122042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfwymjjnzusyprzrrtbtnvzlpkdscbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045197.1788213-347-199820935819973/AnsiballZ_file.py'
Feb 02 15:13:17 compute-0 sudo[122042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:18 compute-0 ceph-mon[75334]: 9.3 scrub starts
Feb 02 15:13:18 compute-0 ceph-mon[75334]: 9.3 scrub ok
Feb 02 15:13:18 compute-0 ceph-mon[75334]: 9.1a scrub starts
Feb 02 15:13:18 compute-0 ceph-mon[75334]: 9.1a scrub ok
Feb 02 15:13:18 compute-0 python3.9[122044]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:18 compute-0 sudo[122042]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:18 compute-0 sudo[122194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjqlrebxxgpgtfgdpkporghrskxingbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045198.4403796-360-143854722709237/AnsiballZ_command.py'
Feb 02 15:13:18 compute-0 sudo[122194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:18 compute-0 python3.9[122196]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:13:18 compute-0 sudo[122194]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:19 compute-0 ceph-mon[75334]: pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:19 compute-0 sudo[122349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uszflkpnwwvekezqgqhhihucfhtgchpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045199.1921396-368-133831496276964/AnsiballZ_blockinfile.py'
Feb 02 15:13:19 compute-0 sudo[122349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:19 compute-0 python3.9[122351]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:19 compute-0 sudo[122349]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:20 compute-0 sudo[122501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asfhzzvyefantcfwnujabrsnuipcwnan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045200.184891-377-5852643305078/AnsiballZ_file.py'
Feb 02 15:13:20 compute-0 sudo[122501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:20 compute-0 python3.9[122503]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:20 compute-0 sudo[122501]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:21 compute-0 ceph-mon[75334]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:21 compute-0 sudo[122653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbvswkwzikojfkkmcizpqpxystapraie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045200.793174-377-191267552394016/AnsiballZ_file.py'
Feb 02 15:13:21 compute-0 sudo[122653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:21 compute-0 python3.9[122655]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:21 compute-0 sudo[122653]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:21 compute-0 sudo[122805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trboqckqkzdhbiibfyumcpoiyrcekrdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045201.440187-392-11657050958001/AnsiballZ_mount.py'
Feb 02 15:13:21 compute-0 sudo[122805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:22 compute-0 python3.9[122807]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 15:13:22 compute-0 sudo[122805]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:22 compute-0 sudo[122957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajfvljixsoebthfofiurswooikucqlto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045202.3261287-392-248232231181859/AnsiballZ_mount.py'
Feb 02 15:13:22 compute-0 sudo[122957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:22 compute-0 python3.9[122959]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 02 15:13:22 compute-0 sudo[122957]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:23 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb 02 15:13:23 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb 02 15:13:23 compute-0 sshd-session[115771]: Connection closed by 192.168.122.30 port 44452
Feb 02 15:13:23 compute-0 sshd-session[115754]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:13:23 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Feb 02 15:13:23 compute-0 systemd[1]: session-40.scope: Consumed 26.342s CPU time.
Feb 02 15:13:23 compute-0 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Feb 02 15:13:23 compute-0 systemd-logind[786]: Removed session 40.
Feb 02 15:13:23 compute-0 ceph-mon[75334]: pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:23 compute-0 ceph-mon[75334]: 9.16 scrub starts
Feb 02 15:13:23 compute-0 ceph-mon[75334]: 9.16 scrub ok
Feb 02 15:13:24 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb 02 15:13:24 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb 02 15:13:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:24 compute-0 ceph-mon[75334]: 9.b scrub starts
Feb 02 15:13:24 compute-0 ceph-mon[75334]: 9.b scrub ok
Feb 02 15:13:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:25 compute-0 ceph-mon[75334]: pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb 02 15:13:26 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb 02 15:13:26 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb 02 15:13:26 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb 02 15:13:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:26 compute-0 ceph-mon[75334]: 9.d scrub starts
Feb 02 15:13:26 compute-0 ceph-mon[75334]: 9.d scrub ok
Feb 02 15:13:26 compute-0 ceph-mon[75334]: 9.4 scrub starts
Feb 02 15:13:26 compute-0 ceph-mon[75334]: 9.4 scrub ok
Feb 02 15:13:26 compute-0 ceph-mon[75334]: pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:29 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb 02 15:13:29 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb 02 15:13:29 compute-0 ceph-mon[75334]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:29 compute-0 ceph-mon[75334]: 9.2 scrub starts
Feb 02 15:13:29 compute-0 ceph-mon[75334]: 9.2 scrub ok
Feb 02 15:13:30 compute-0 sshd-session[122984]: Accepted publickey for zuul from 192.168.122.30 port 38974 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:13:30 compute-0 systemd-logind[786]: New session 41 of user zuul.
Feb 02 15:13:30 compute-0 systemd[1]: Started Session 41 of User zuul.
Feb 02 15:13:30 compute-0 sshd-session[122984]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:13:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 02 15:13:30 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 02 15:13:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:30 compute-0 sudo[123137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekuagkfrkrvwaxtdziswfpklhptbczop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045210.2355778-16-222389597132505/AnsiballZ_tempfile.py'
Feb 02 15:13:30 compute-0 sudo[123137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:30 compute-0 ceph-mon[75334]: 9.a scrub starts
Feb 02 15:13:30 compute-0 ceph-mon[75334]: 9.a scrub ok
Feb 02 15:13:30 compute-0 ceph-mon[75334]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:30 compute-0 python3.9[123139]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 02 15:13:30 compute-0 sudo[123137]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb 02 15:13:31 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb 02 15:13:31 compute-0 sudo[123289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbyadrqeeyaddevbfzuliufhyhnokgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045211.1220992-28-6006406153409/AnsiballZ_stat.py'
Feb 02 15:13:31 compute-0 sudo[123289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:31 compute-0 python3.9[123291]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:13:31 compute-0 sudo[123289]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:31 compute-0 ceph-mon[75334]: 9.10 scrub starts
Feb 02 15:13:31 compute-0 ceph-mon[75334]: 9.10 scrub ok
Feb 02 15:13:32 compute-0 sudo[123443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbwgybhvkwukuejnxxfstcbuahyhgpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045211.9178884-36-86181146736160/AnsiballZ_slurp.py'
Feb 02 15:13:32 compute-0 sudo[123443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:32 compute-0 python3.9[123445]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 02 15:13:32 compute-0 sudo[123443]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:32 compute-0 ceph-mon[75334]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:32 compute-0 sudo[123595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdgsisibqosryoewxgkuhdhhfrkxtaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045212.7308547-44-110772080925764/AnsiballZ_stat.py'
Feb 02 15:13:32 compute-0 sudo[123595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:33 compute-0 python3.9[123597]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.rh0uo321 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:13:33 compute-0 sudo[123595]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:33 compute-0 sudo[123720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vboxssszclnwabeijhqtufkpxmgpsrqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045212.7308547-44-110772080925764/AnsiballZ_copy.py'
Feb 02 15:13:33 compute-0 sudo[123720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:33 compute-0 python3.9[123722]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.rh0uo321 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045212.7308547-44-110772080925764/.source.rh0uo321 _original_basename=.gk10acv4 follow=False checksum=18487cc92df08b5a25174c33af4d63a7611e08fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:33 compute-0 sudo[123720]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:34 compute-0 sudo[123872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pherrqxnueebokrujotxehcjjiuigqpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045214.0931664-59-145352849536793/AnsiballZ_setup.py'
Feb 02 15:13:34 compute-0 sudo[123872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:34 compute-0 python3.9[123874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:13:34 compute-0 sudo[123872]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb 02 15:13:35 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb 02 15:13:35 compute-0 sudo[124024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjctxeobijkrswqiozxnkcixxqvulnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045215.1903086-68-40910112842587/AnsiballZ_blockinfile.py'
Feb 02 15:13:35 compute-0 sudo[124024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:35 compute-0 python3.9[124026]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCujbCE344dzd5aW6GG04mR4h2IsckejACLY7ihWz4yYp81LQjEf3SFracBI8VRXub4oT9gkdzEXyLbvZ0BHsvBiNGkn16VOJ9Q3/GqKhU6E58mswaIOBpHKPHeW98mVcKwx7Sr++vzFwxKZcAs5adxcVfSLgRkkehKwMnp8Q532D24Ve7hfVLLjEPqqXAIxgumXpcgBlozL+69tEoxYMipxmf9Lb6EzgeWun+GLKSpEABakFJzad8F+CirhPEkREeGYUpNAKKU2Fv6H43cm8VGjdZ4cc4cITm7os6tUblAMee6NPQY6C7B8I2leHcey0yiT6zsZSSumGWfMGgl8E7tlrgsWLt9GKsEOrl3xxcAYV1vwl4498I4vD6snf1B0Jtki+BYhGzcN32imYItx4W1Ev/JhHfWHYOZUbStwuEtin2wxLS4MijlR+A4c2HoJZaUvSg2h8pKH/gK3hslcCA2vFH8P2C0gSeZxcPJ1UloBHbUmXScshovc23RMVcVtIc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIImBKhc0pGFrUmCwl/KjuUkeButVm48ak5OqOT7WW0FK
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBFMuj0/gkzyq5OTLSvXRObhIfDk9AaGYJS/YW5/yeiFYKNmn1O5EdHf9Zx7iuWkXi6VSxpStB/Z4Y9fR602dno=
                                              create=True mode=0644 path=/tmp/ansible.rh0uo321 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:35 compute-0 sudo[124024]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:35 compute-0 ceph-mon[75334]: pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:35 compute-0 ceph-mon[75334]: 9.5 scrub starts
Feb 02 15:13:35 compute-0 ceph-mon[75334]: 9.5 scrub ok
Feb 02 15:13:36 compute-0 sudo[124176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwbulmmhfqujxgbumyeljxknocwqbhnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045215.9590151-76-159404067957648/AnsiballZ_command.py'
Feb 02 15:13:36 compute-0 sudo[124176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:36 compute-0 python3.9[124178]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.rh0uo321' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:13:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 02 15:13:36 compute-0 sudo[124176]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:36 compute-0 ceph-mon[75334]: pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:37 compute-0 sudo[124332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxpxcrnpllieirzjathyybwzulhqojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045216.7546775-84-77155063231046/AnsiballZ_file.py'
Feb 02 15:13:37 compute-0 sudo[124332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb 02 15:13:37 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb 02 15:13:37 compute-0 python3.9[124334]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.rh0uo321 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:37 compute-0 sudo[124332]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:37 compute-0 sshd-session[122987]: Connection closed by 192.168.122.30 port 38974
Feb 02 15:13:37 compute-0 sshd-session[122984]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:13:37 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Feb 02 15:13:37 compute-0 systemd[1]: session-41.scope: Consumed 4.583s CPU time.
Feb 02 15:13:37 compute-0 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Feb 02 15:13:37 compute-0 systemd-logind[786]: Removed session 41.
Feb 02 15:13:37 compute-0 ceph-mon[75334]: 9.1 scrub starts
Feb 02 15:13:37 compute-0 ceph-mon[75334]: 9.1 scrub ok
Feb 02 15:13:38 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb 02 15:13:38 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb 02 15:13:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:38 compute-0 ceph-mon[75334]: 9.0 scrub starts
Feb 02 15:13:38 compute-0 ceph-mon[75334]: 9.0 scrub ok
Feb 02 15:13:38 compute-0 ceph-mon[75334]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb 02 15:13:40 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb 02 15:13:40 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb 02 15:13:40 compute-0 ceph-osd[87170]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb 02 15:13:40 compute-0 ceph-mon[75334]: 9.1f scrub starts
Feb 02 15:13:40 compute-0 ceph-mon[75334]: 9.1f scrub ok
Feb 02 15:13:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:41 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 02 15:13:41 compute-0 ceph-osd[86115]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 02 15:13:41 compute-0 ceph-mon[75334]: 9.1c scrub starts
Feb 02 15:13:41 compute-0 ceph-mon[75334]: 9.1c scrub ok
Feb 02 15:13:41 compute-0 ceph-mon[75334]: pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:42 compute-0 ceph-mon[75334]: 9.1e scrub starts
Feb 02 15:13:42 compute-0 ceph-mon[75334]: 9.1e scrub ok
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:13:42
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.mgr']
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:13:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:42 compute-0 sshd-session[124360]: Accepted publickey for zuul from 192.168.122.30 port 44810 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:13:42 compute-0 systemd-logind[786]: New session 42 of user zuul.
Feb 02 15:13:42 compute-0 systemd[1]: Started Session 42 of User zuul.
Feb 02 15:13:43 compute-0 sshd-session[124360]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:13:43 compute-0 ceph-mon[75334]: pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:44 compute-0 python3.9[124513]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:13:44 compute-0 sudo[124594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:13:44 compute-0 sudo[124594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:13:44 compute-0 sudo[124594]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:13:44 compute-0 sudo[124619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:13:44 compute-0 sudo[124619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:44 compute-0 sudo[124717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyijzqrlijnugcjhsgeuipxjyhdqcqet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045224.3864608-27-27684254982535/AnsiballZ_systemd.py'
Feb 02 15:13:44 compute-0 sudo[124717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:44 compute-0 ceph-mon[75334]: pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:45 compute-0 sudo[124619]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:13:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:13:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:13:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:13:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:13:45 compute-0 python3.9[124721]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 15:13:45 compute-0 sudo[124717]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:13:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:13:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:13:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:13:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:13:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:13:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:13:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:13:46 compute-0 sudo[124757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:13:46 compute-0 sudo[124757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:46 compute-0 sudo[124757]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:46 compute-0 sudo[124805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:13:46 compute-0 sudo[124805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.204734876 +0000 UTC m=+0.058925593 container create b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:13:47 compute-0 systemd[1]: Started libpod-conmon-b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f.scope.
Feb 02 15:13:47 compute-0 sudo[124984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfyhvhjvgkessneifwcaxuynmpmywun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045226.9235766-35-95114650704972/AnsiballZ_systemd.py'
Feb 02 15:13:47 compute-0 sudo[124984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.176636314 +0000 UTC m=+0.030827071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.28969622 +0000 UTC m=+0.143886967 container init b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.300111544 +0000 UTC m=+0.154302231 container start b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.30423308 +0000 UTC m=+0.158423797 container attach b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:13:47 compute-0 strange_pasteur[124986]: 167 167
Feb 02 15:13:47 compute-0 systemd[1]: libpod-b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f.scope: Deactivated successfully.
Feb 02 15:13:47 compute-0 conmon[124986]: conmon b7a90182163cabe099da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f.scope/container/memory.events
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.308649345 +0000 UTC m=+0.162840062 container died b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-64aac9f487741c24eacfb29b19d8d7e093da0727e69b6b727a75f0d0a526dcc9-merged.mount: Deactivated successfully.
Feb 02 15:13:47 compute-0 podman[124921]: 2026-02-02 15:13:47.355038572 +0000 UTC m=+0.209229289 container remove b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:13:47 compute-0 systemd[1]: libpod-conmon-b7a90182163cabe099da367ae35060a0d47f7e48428831d9ffc1bfa382a5951f.scope: Deactivated successfully.
Feb 02 15:13:47 compute-0 podman[125010]: 2026-02-02 15:13:47.510611007 +0000 UTC m=+0.053790877 container create 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:13:47 compute-0 python3.9[124988]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:13:47 compute-0 systemd[1]: Started libpod-conmon-3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059.scope.
Feb 02 15:13:47 compute-0 podman[125010]: 2026-02-02 15:13:47.492079495 +0000 UTC m=+0.035259405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:47 compute-0 sudo[124984]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:47 compute-0 podman[125010]: 2026-02-02 15:13:47.62353558 +0000 UTC m=+0.166715470 container init 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:13:47 compute-0 podman[125010]: 2026-02-02 15:13:47.63987436 +0000 UTC m=+0.183054280 container start 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:13:47 compute-0 podman[125010]: 2026-02-02 15:13:47.644478931 +0000 UTC m=+0.187658831 container attach 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:13:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:13:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:13:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:13:47 compute-0 ceph-mon[75334]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:48 compute-0 modest_almeida[125028]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:13:48 compute-0 modest_almeida[125028]: --> All data devices are unavailable
Feb 02 15:13:48 compute-0 systemd[1]: libpod-3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059.scope: Deactivated successfully.
Feb 02 15:13:48 compute-0 podman[125010]: 2026-02-02 15:13:48.10353553 +0000 UTC m=+0.646715410 container died 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bf1c411d07a43e540dc64e5d2f91f809e75d9a2faeecc19a31cb6ce01b5803f-merged.mount: Deactivated successfully.
Feb 02 15:13:48 compute-0 podman[125010]: 2026-02-02 15:13:48.15039014 +0000 UTC m=+0.693570060 container remove 3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:13:48 compute-0 systemd[1]: libpod-conmon-3bccebdccb7cb959c6e35315b36188054eb37ba55a1b79b04e3908da6d37b059.scope: Deactivated successfully.
Feb 02 15:13:48 compute-0 sudo[124805]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:48 compute-0 sudo[125228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzrwjacuqiivxjdemtckfavaqalfipq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045227.8100033-44-113341335813467/AnsiballZ_command.py'
Feb 02 15:13:48 compute-0 sudo[125190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:13:48 compute-0 sudo[125190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:48 compute-0 sudo[125228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:48 compute-0 sudo[125190]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:48 compute-0 sudo[125235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:13:48 compute-0 sudo[125235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:48 compute-0 python3.9[125234]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:13:48 compute-0 sudo[125228]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.560528961 +0000 UTC m=+0.039196086 container create a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:13:48 compute-0 systemd[1]: Started libpod-conmon-a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af.scope.
Feb 02 15:13:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.542746419 +0000 UTC m=+0.021413534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.64495081 +0000 UTC m=+0.123617975 container init a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.65130105 +0000 UTC m=+0.129968165 container start a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.655245301 +0000 UTC m=+0.133912486 container attach a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:13:48 compute-0 jolly_tu[125326]: 167 167
Feb 02 15:13:48 compute-0 systemd[1]: libpod-a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af.scope: Deactivated successfully.
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.65698597 +0000 UTC m=+0.135653105 container died a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-01a85f4f9b8080c388e57358952974d04087487c68491aa2f797bbe061f451de-merged.mount: Deactivated successfully.
Feb 02 15:13:48 compute-0 podman[125294]: 2026-02-02 15:13:48.697985925 +0000 UTC m=+0.176653010 container remove a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:13:48 compute-0 systemd[1]: libpod-conmon-a957054adaf5ccd238f5f8b9adcd3c440f4df18d7d898f57d37387ec926795af.scope: Deactivated successfully.
Feb 02 15:13:48 compute-0 podman[125392]: 2026-02-02 15:13:48.818823811 +0000 UTC m=+0.037739095 container create e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:13:48 compute-0 systemd[1]: Started libpod-conmon-e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320.scope.
Feb 02 15:13:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47aae6bd5416bb206dc76ece398887b2276c511b61e9021993bc3e58333b557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:48 compute-0 podman[125392]: 2026-02-02 15:13:48.800219557 +0000 UTC m=+0.019134851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47aae6bd5416bb206dc76ece398887b2276c511b61e9021993bc3e58333b557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47aae6bd5416bb206dc76ece398887b2276c511b61e9021993bc3e58333b557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47aae6bd5416bb206dc76ece398887b2276c511b61e9021993bc3e58333b557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:48 compute-0 podman[125392]: 2026-02-02 15:13:48.920660491 +0000 UTC m=+0.139575855 container init e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:13:48 compute-0 podman[125392]: 2026-02-02 15:13:48.935217842 +0000 UTC m=+0.154133156 container start e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:13:48 compute-0 podman[125392]: 2026-02-02 15:13:48.939419121 +0000 UTC m=+0.158334435 container attach e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:13:48 compute-0 ceph-mon[75334]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:49 compute-0 sudo[125486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaqmoqvfpudvshtmxharlbsaexwaloyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045228.6230688-52-157971966995959/AnsiballZ_stat.py'
Feb 02 15:13:49 compute-0 sudo[125486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]: {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     "0": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "devices": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "/dev/loop3"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             ],
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_name": "ceph_lv0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_size": "21470642176",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "name": "ceph_lv0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "tags": {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_name": "ceph",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.crush_device_class": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.encrypted": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.objectstore": "bluestore",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_id": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.vdo": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.with_tpm": "0"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             },
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "vg_name": "ceph_vg0"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         }
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     ],
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     "1": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "devices": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "/dev/loop4"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             ],
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_name": "ceph_lv1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_size": "21470642176",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "name": "ceph_lv1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "tags": {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_name": "ceph",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.crush_device_class": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.encrypted": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.objectstore": "bluestore",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_id": "1",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.vdo": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.with_tpm": "0"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             },
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "vg_name": "ceph_vg1"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         }
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     ],
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     "2": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "devices": [
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "/dev/loop5"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             ],
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_name": "ceph_lv2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_size": "21470642176",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "name": "ceph_lv2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "tags": {
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.cluster_name": "ceph",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.crush_device_class": "",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.encrypted": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.objectstore": "bluestore",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osd_id": "2",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.vdo": "0",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:                 "ceph.with_tpm": "0"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             },
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "type": "block",
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:             "vg_name": "ceph_vg2"
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:         }
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]:     ]
Feb 02 15:13:49 compute-0 nostalgic_leakey[125408]: }
Feb 02 15:13:49 compute-0 systemd[1]: libpod-e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320.scope: Deactivated successfully.
Feb 02 15:13:49 compute-0 podman[125392]: 2026-02-02 15:13:49.307357331 +0000 UTC m=+0.526272635 container died e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:13:49 compute-0 python3.9[125489]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c47aae6bd5416bb206dc76ece398887b2276c511b61e9021993bc3e58333b557-merged.mount: Deactivated successfully.
Feb 02 15:13:49 compute-0 sudo[125486]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:49 compute-0 podman[125392]: 2026-02-02 15:13:49.363540504 +0000 UTC m=+0.582455778 container remove e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:13:49 compute-0 systemd[1]: libpod-conmon-e85e88b93465905907d50a18cfbe795287fdc81b15d1e7fd83b01522793a6320.scope: Deactivated successfully.
Feb 02 15:13:49 compute-0 sudo[125235]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:49 compute-0 sudo[125527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:13:49 compute-0 sudo[125527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:49 compute-0 sudo[125527]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:49 compute-0 sudo[125552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:13:49 compute-0 sudo[125552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.83380658 +0000 UTC m=+0.059578340 container create 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 15:13:49 compute-0 systemd[1]: Started libpod-conmon-3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67.scope.
Feb 02 15:13:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.810444992 +0000 UTC m=+0.036216802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.914789903 +0000 UTC m=+0.140561653 container init 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.92001794 +0000 UTC m=+0.145789660 container start 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.923029615 +0000 UTC m=+0.148801365 container attach 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:13:49 compute-0 friendly_feistel[125681]: 167 167
Feb 02 15:13:49 compute-0 systemd[1]: libpod-3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67.scope: Deactivated successfully.
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.927684556 +0000 UTC m=+0.153456306 container died 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-743a374c157c59f62fae9b6e7f443e07c56d2fe9512b6c4cfa6bd7e4400521b4-merged.mount: Deactivated successfully.
Feb 02 15:13:49 compute-0 podman[125641]: 2026-02-02 15:13:49.973719904 +0000 UTC m=+0.199491624 container remove 3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:13:49 compute-0 systemd[1]: libpod-conmon-3d8f622fdeb560f8632470a53a8cb63024816f26d4f80dbb285c6cd5ae205f67.scope: Deactivated successfully.
Feb 02 15:13:50 compute-0 sudo[125751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlwhhlxzmreywsyfqetlnolrpacgcvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045229.5372877-61-125106512644286/AnsiballZ_file.py'
Feb 02 15:13:50 compute-0 sudo[125751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:50 compute-0 podman[125759]: 2026-02-02 15:13:50.160880109 +0000 UTC m=+0.055496735 container create 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:13:50 compute-0 systemd[1]: Started libpod-conmon-6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232.scope.
Feb 02 15:13:50 compute-0 python3.9[125753]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:13:50 compute-0 podman[125759]: 2026-02-02 15:13:50.137746917 +0000 UTC m=+0.032363553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:13:50 compute-0 sudo[125751]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2fb8ea14ca2fc296061192339c83a7d957f9635343833e0ba65542f7ef6fdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2fb8ea14ca2fc296061192339c83a7d957f9635343833e0ba65542f7ef6fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2fb8ea14ca2fc296061192339c83a7d957f9635343833e0ba65542f7ef6fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d2fb8ea14ca2fc296061192339c83a7d957f9635343833e0ba65542f7ef6fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:13:50 compute-0 podman[125759]: 2026-02-02 15:13:50.29678272 +0000 UTC m=+0.191399376 container init 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:13:50 compute-0 podman[125759]: 2026-02-02 15:13:50.309273502 +0000 UTC m=+0.203890118 container start 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:13:50 compute-0 podman[125759]: 2026-02-02 15:13:50.313056989 +0000 UTC m=+0.207673655 container attach 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:13:50 compute-0 sshd-session[124363]: Connection closed by 192.168.122.30 port 44810
Feb 02 15:13:50 compute-0 sshd-session[124360]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:13:50 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Feb 02 15:13:50 compute-0 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Feb 02 15:13:50 compute-0 systemd[1]: session-42.scope: Consumed 3.709s CPU time.
Feb 02 15:13:50 compute-0 systemd-logind[786]: Removed session 42.
Feb 02 15:13:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:50 compute-0 ceph-mon[75334]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:51 compute-0 lvm[125882]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:13:51 compute-0 lvm[125880]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:13:51 compute-0 lvm[125882]: VG ceph_vg2 finished
Feb 02 15:13:51 compute-0 lvm[125880]: VG ceph_vg1 finished
Feb 02 15:13:51 compute-0 lvm[125879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:13:51 compute-0 lvm[125879]: VG ceph_vg0 finished
Feb 02 15:13:51 compute-0 vibrant_leavitt[125776]: {}
Feb 02 15:13:51 compute-0 systemd[1]: libpod-6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232.scope: Deactivated successfully.
Feb 02 15:13:51 compute-0 podman[125759]: 2026-02-02 15:13:51.218467839 +0000 UTC m=+1.113084425 container died 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:13:51 compute-0 systemd[1]: libpod-6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232.scope: Consumed 1.302s CPU time.
Feb 02 15:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d2fb8ea14ca2fc296061192339c83a7d957f9635343833e0ba65542f7ef6fdb-merged.mount: Deactivated successfully.
Feb 02 15:13:51 compute-0 podman[125759]: 2026-02-02 15:13:51.272474482 +0000 UTC m=+1.167091088 container remove 6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:13:51 compute-0 systemd[1]: libpod-conmon-6f020a0eb59c437208742c1232626c493ddaecaf66930784f90c533c9fee0232.scope: Deactivated successfully.
Feb 02 15:13:51 compute-0 sudo[125552]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:13:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:13:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:51 compute-0 sudo[125897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:13:51 compute-0 sudo[125897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:13:51 compute-0 sudo[125897]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:13:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:53 compute-0 ceph-mon[75334]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:13:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:54 compute-0 ceph-mon[75334]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:56 compute-0 sshd-session[125922]: Accepted publickey for zuul from 192.168.122.30 port 51138 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:13:56 compute-0 systemd-logind[786]: New session 43 of user zuul.
Feb 02 15:13:56 compute-0 systemd[1]: Started Session 43 of User zuul.
Feb 02 15:13:56 compute-0 sshd-session[125922]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:13:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:13:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:56 compute-0 ceph-mon[75334]: pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:57 compute-0 python3.9[126075]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:13:57 compute-0 sudo[126229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwphotbtdkjnnnphvswnwuaieanmzisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045237.5040493-29-18878467248801/AnsiballZ_setup.py'
Feb 02 15:13:57 compute-0 sudo[126229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:58 compute-0 python3.9[126231]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:13:58 compute-0 sudo[126229]: pam_unix(sudo:session): session closed for user root
Feb 02 15:13:58 compute-0 sudo[126313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuzgunvhdyuzbxdixvpnnzcntfvatpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045237.5040493-29-18878467248801/AnsiballZ_dnf.py'
Feb 02 15:13:58 compute-0 sudo[126313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:13:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:58 compute-0 ceph-mon[75334]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:13:58 compute-0 python3.9[126315]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 02 15:14:00 compute-0 sudo[126313]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:00 compute-0 ceph-mon[75334]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:01 compute-0 python3.9[126466]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:14:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:02 compute-0 python3.9[126617]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:14:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:02 compute-0 ceph-mon[75334]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:03 compute-0 python3.9[126767]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:14:03 compute-0 python3.9[126917]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:14:04 compute-0 sshd-session[125925]: Connection closed by 192.168.122.30 port 51138
Feb 02 15:14:04 compute-0 sshd-session[125922]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:14:04 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Feb 02 15:14:04 compute-0 systemd[1]: session-43.scope: Consumed 5.523s CPU time.
Feb 02 15:14:04 compute-0 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Feb 02 15:14:04 compute-0 systemd-logind[786]: Removed session 43.
Feb 02 15:14:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:04 compute-0 ceph-mon[75334]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:06 compute-0 ceph-mon[75334]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:08 compute-0 ceph-mon[75334]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:09 compute-0 sshd-session[126942]: Accepted publickey for zuul from 192.168.122.30 port 55012 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:14:09 compute-0 systemd-logind[786]: New session 44 of user zuul.
Feb 02 15:14:09 compute-0 systemd[1]: Started Session 44 of User zuul.
Feb 02 15:14:09 compute-0 sshd-session[126942]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:14:10 compute-0 python3.9[127095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:14:10 compute-0 sshd-session[71518]: Received disconnect from 38.129.56.75 port 41702:11: disconnected by user
Feb 02 15:14:10 compute-0 sshd-session[71518]: Disconnected from user zuul 38.129.56.75 port 41702
Feb 02 15:14:10 compute-0 sshd-session[71515]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:14:10 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Feb 02 15:14:10 compute-0 systemd[1]: session-18.scope: Consumed 1min 28.474s CPU time.
Feb 02 15:14:10 compute-0 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Feb 02 15:14:10 compute-0 systemd-logind[786]: Removed session 18.
Feb 02 15:14:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:11 compute-0 ceph-mon[75334]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:11 compute-0 sudo[127249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrpgjmmgtxwmwaouyjcwoqcgkkxlimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045251.5014133-45-246350062598409/AnsiballZ_file.py'
Feb 02 15:14:11 compute-0 sudo[127249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:12 compute-0 python3.9[127251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:12 compute-0 sudo[127249]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:12 compute-0 sudo[127401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzrzyjtmkkrfucoyddiixefznlxgsvlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045252.183399-45-32495923852592/AnsiballZ_file.py'
Feb 02 15:14:12 compute-0 sudo[127401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:12 compute-0 python3.9[127403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:12 compute-0 sudo[127401]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:12 compute-0 ceph-mon[75334]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:13 compute-0 sudo[127553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlxcghdvzvolxlypkvskbbgbmolowucx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045252.761892-60-20649376877433/AnsiballZ_stat.py'
Feb 02 15:14:13 compute-0 sudo[127553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:13 compute-0 python3.9[127555]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:13 compute-0 sudo[127553]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:13 compute-0 sudo[127676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjxghacmrfemlhyqhoyzmpldttzlgtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045252.761892-60-20649376877433/AnsiballZ_copy.py'
Feb 02 15:14:13 compute-0 sudo[127676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:13 compute-0 python3.9[127678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045252.761892-60-20649376877433/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4544c1c8f1d6b68b7ffddb5c35c3a807c34ae06f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:13 compute-0 sudo[127676]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:14 compute-0 sudo[127828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idiljjklpxuvoknajgtxrrfzzuynacoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045254.0845406-60-18199576391874/AnsiballZ_stat.py'
Feb 02 15:14:14 compute-0 sudo[127828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:14 compute-0 python3.9[127830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:14 compute-0 sudo[127828]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:14 compute-0 ceph-mon[75334]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:15 compute-0 sudo[127951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtugjreqoatpjxvuwhvmxcbpireizll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045254.0845406-60-18199576391874/AnsiballZ_copy.py'
Feb 02 15:14:15 compute-0 sudo[127951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:15 compute-0 python3.9[127953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045254.0845406-60-18199576391874/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a1f7e51767c6badba58e73ba52b85315c39fbac5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:15 compute-0 sudo[127951]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:15 compute-0 sudo[128103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjsukkebubvsfnkptloxpzxezlpitchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045255.4140856-60-141771835420747/AnsiballZ_stat.py'
Feb 02 15:14:15 compute-0 sudo[128103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:15 compute-0 python3.9[128105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:15 compute-0 sudo[128103]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:16 compute-0 sudo[128226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdvwowqssxyrplrimgvgrryyisvifme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045255.4140856-60-141771835420747/AnsiballZ_copy.py'
Feb 02 15:14:16 compute-0 sudo[128226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:16 compute-0 python3.9[128228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045255.4140856-60-141771835420747/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a48b8d0842a30ce3f7a6423482ce104a1c94946c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:16 compute-0 sudo[128226]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:16 compute-0 sudo[128378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjfzdrfrpwppzzmwhzfjbeiicevxtdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045256.557651-104-47996685654884/AnsiballZ_file.py'
Feb 02 15:14:16 compute-0 sudo[128378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:16 compute-0 ceph-mon[75334]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:17 compute-0 python3.9[128380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:17 compute-0 sudo[128378]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:17 compute-0 sudo[128530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjzkryrmffpzktslmkoqtzjazqyyuwqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045257.195318-104-55532552166124/AnsiballZ_file.py'
Feb 02 15:14:17 compute-0 sudo[128530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:17 compute-0 python3.9[128532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:17 compute-0 sudo[128530]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:18 compute-0 sudo[128682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kldiionyruvjhflkzjsvpzcsqjxlzjdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045257.8199697-119-270683197194090/AnsiballZ_stat.py'
Feb 02 15:14:18 compute-0 sudo[128682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:18 compute-0 python3.9[128684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:18 compute-0 sudo[128682]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:18 compute-0 sudo[128805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whurhuautbgvcicrmxlyarwdklfgvpjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045257.8199697-119-270683197194090/AnsiballZ_copy.py'
Feb 02 15:14:18 compute-0 sudo[128805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:18 compute-0 python3.9[128807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045257.8199697-119-270683197194090/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8bd1fb811e7faf4f9c247a7ee82763e93d20091b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:18 compute-0 sudo[128805]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:18 compute-0 ceph-mon[75334]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:19 compute-0 sudo[128957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpxtqcxwabxtufjejxtctcvguuqiugmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045259.0360808-119-38383108145480/AnsiballZ_stat.py'
Feb 02 15:14:19 compute-0 sudo[128957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:19 compute-0 python3.9[128959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:19 compute-0 sudo[128957]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:19 compute-0 sudo[129080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxxaqiohfijcxyokpkvmfzneclfmoxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045259.0360808-119-38383108145480/AnsiballZ_copy.py'
Feb 02 15:14:19 compute-0 sudo[129080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:20 compute-0 python3.9[129082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045259.0360808-119-38383108145480/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=449593ce096c52015454774d18dcfa3ada7002c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:20 compute-0 sudo[129080]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:20 compute-0 sudo[129232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vytfdpemkoqoyjescsoakogjrkfgqghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045260.306253-119-74692070320257/AnsiballZ_stat.py'
Feb 02 15:14:20 compute-0 sudo[129232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:20 compute-0 python3.9[129234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:20 compute-0 sudo[129232]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:20 compute-0 ceph-mon[75334]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:21 compute-0 sudo[129355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdeazqbtaekwtbfftaaikajnwaiumrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045260.306253-119-74692070320257/AnsiballZ_copy.py'
Feb 02 15:14:21 compute-0 sudo[129355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:21 compute-0 python3.9[129357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045260.306253-119-74692070320257/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1debc903cba93346546cd90bc7ad17072fd2e9cc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:21 compute-0 sudo[129355]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:21 compute-0 sudo[129507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usghryvxyjtulrntppgtwahifghixzhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045261.505171-163-2867202912809/AnsiballZ_file.py'
Feb 02 15:14:21 compute-0 sudo[129507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:21 compute-0 python3.9[129509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:21 compute-0 sudo[129507]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:22 compute-0 sudo[129659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldzkxwxdddhemvbkiutvlfzgqnzfwkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045262.05128-163-225163819146292/AnsiballZ_file.py'
Feb 02 15:14:22 compute-0 sudo[129659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:22 compute-0 python3.9[129661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:22 compute-0 sudo[129659]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:22 compute-0 sudo[129811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajqoioxqbxiehfqrnosaksxhjyglsdvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045262.6771274-178-233654164543117/AnsiballZ_stat.py'
Feb 02 15:14:22 compute-0 sudo[129811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:22 compute-0 ceph-mon[75334]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:23 compute-0 python3.9[129813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:23 compute-0 sudo[129811]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:23 compute-0 sudo[129934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujbafsxbncpjnrtadalyyhdteypjdwrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045262.6771274-178-233654164543117/AnsiballZ_copy.py'
Feb 02 15:14:23 compute-0 sudo[129934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:23 compute-0 python3.9[129936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045262.6771274-178-233654164543117/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=42c7785d0dc6a92e3e0ec269af84204e0dde4ea6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:23 compute-0 sudo[129934]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:24 compute-0 sudo[130086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woqulaxvqfcnfuhfiezarmgawpkftkct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045263.9252996-178-37438426678224/AnsiballZ_stat.py'
Feb 02 15:14:24 compute-0 sudo[130086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:24 compute-0 python3.9[130088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:24 compute-0 sudo[130086]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:24 compute-0 sudo[130209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxzlrjjywqfldivgihdggxikezuwvwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045263.9252996-178-37438426678224/AnsiballZ_copy.py'
Feb 02 15:14:24 compute-0 sudo[130209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:24 compute-0 python3.9[130211]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045263.9252996-178-37438426678224/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=449593ce096c52015454774d18dcfa3ada7002c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:24 compute-0 sudo[130209]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:24 compute-0 ceph-mon[75334]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:25 compute-0 sudo[130361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgzkxcapapqwmcyphotvedxqsrmseux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045265.000082-178-246364290225881/AnsiballZ_stat.py'
Feb 02 15:14:25 compute-0 sudo[130361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:25 compute-0 python3.9[130363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:25 compute-0 sudo[130361]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:25 compute-0 sudo[130484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzpcalldsugwqpmwinhtnqmbjvcokzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045265.000082-178-246364290225881/AnsiballZ_copy.py'
Feb 02 15:14:25 compute-0 sudo[130484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:26 compute-0 python3.9[130486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045265.000082-178-246364290225881/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=daa14b059674c9c3429f4af7b79c6e9b7d0f303e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:26 compute-0 sudo[130484]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:26 compute-0 ceph-mon[75334]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:27 compute-0 sudo[130636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjbbieenqwnmhhdbbhohnadjhooktwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045266.7931406-238-75153442644358/AnsiballZ_file.py'
Feb 02 15:14:27 compute-0 sudo[130636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:27 compute-0 python3.9[130638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:27 compute-0 sudo[130636]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:27 compute-0 sudo[130788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhefsjkfmrifgouvtnfrbwyfqpmhsrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045267.4484415-246-272989564429020/AnsiballZ_stat.py'
Feb 02 15:14:27 compute-0 sudo[130788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:27 compute-0 python3.9[130790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:27 compute-0 sudo[130788]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:28 compute-0 sudo[130911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvemkgficvjvdzaybvarbpvxynmpmmkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045267.4484415-246-272989564429020/AnsiballZ_copy.py'
Feb 02 15:14:28 compute-0 sudo[130911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:28 compute-0 python3.9[130913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045267.4484415-246-272989564429020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:28 compute-0 sudo[130911]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:28 compute-0 ceph-mon[75334]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:29 compute-0 sudo[131063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvlvjwlqfwmlhdngveayivjgpfkxevk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045268.8744998-262-240705222149820/AnsiballZ_file.py'
Feb 02 15:14:29 compute-0 sudo[131063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:29 compute-0 python3.9[131065]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:29 compute-0 sudo[131063]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:29 compute-0 sudo[131215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mofsfxmoetytcyexqolxwwquaammcvej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045269.4673214-270-45909793104732/AnsiballZ_stat.py'
Feb 02 15:14:29 compute-0 sudo[131215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:29 compute-0 python3.9[131217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:29 compute-0 sudo[131215]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:30 compute-0 sudo[131338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odoowjrwwsazzccyzfntoqjhfanqdzrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045269.4673214-270-45909793104732/AnsiballZ_copy.py'
Feb 02 15:14:30 compute-0 sudo[131338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:30 compute-0 python3.9[131340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045269.4673214-270-45909793104732/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:30 compute-0 sudo[131338]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:30 compute-0 sudo[131490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervubzmsfrskozdhwczjrindoirxglbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045270.5977254-286-211909693230326/AnsiballZ_file.py'
Feb 02 15:14:30 compute-0 sudo[131490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:30 compute-0 python3.9[131492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:30 compute-0 sudo[131490]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:30 compute-0 ceph-mon[75334]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:31 compute-0 sudo[131642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbnbvsgdpkerzrzoqtgdykjycoqprfnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045271.1124096-294-181749664834240/AnsiballZ_stat.py'
Feb 02 15:14:31 compute-0 sudo[131642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:31 compute-0 python3.9[131644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:31 compute-0 sudo[131642]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:31 compute-0 sudo[131765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudvlrqsnbcffaxtoiaalilnhmfbxghm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045271.1124096-294-181749664834240/AnsiballZ_copy.py'
Feb 02 15:14:31 compute-0 sudo[131765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:32 compute-0 python3.9[131767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045271.1124096-294-181749664834240/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:32 compute-0 sudo[131765]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:32 compute-0 sudo[131917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbbacehdpnhvftikmjlphzmgtstdqvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045272.4764867-310-124421525480297/AnsiballZ_file.py'
Feb 02 15:14:32 compute-0 sudo[131917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:32 compute-0 python3.9[131919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:32 compute-0 sudo[131917]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:32 compute-0 ceph-mon[75334]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:33 compute-0 sudo[132069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqtdpciirxxkjqjzuugdhzbtocepzatr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045273.166396-318-68523123995799/AnsiballZ_stat.py'
Feb 02 15:14:33 compute-0 sudo[132069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:33 compute-0 python3.9[132071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:33 compute-0 sudo[132069]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:33 compute-0 sudo[132192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikpkyfijzisqsjrkvupzzihyossbnwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045273.166396-318-68523123995799/AnsiballZ_copy.py'
Feb 02 15:14:33 compute-0 sudo[132192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:34 compute-0 python3.9[132194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045273.166396-318-68523123995799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:34 compute-0 sudo[132192]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:34 compute-0 sudo[132344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omijofqumvotkqayzoaacysnagqwqlhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045274.3786628-334-59482646334394/AnsiballZ_file.py'
Feb 02 15:14:34 compute-0 sudo[132344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:34 compute-0 python3.9[132346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:34 compute-0 sudo[132344]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:34 compute-0 ceph-mon[75334]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:35 compute-0 sudo[132496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fifvdtxfiiejyvpqcjcxhbzhtukosgec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045275.0434012-342-216562678284839/AnsiballZ_stat.py'
Feb 02 15:14:35 compute-0 sudo[132496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:35 compute-0 python3.9[132498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:35 compute-0 sudo[132496]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:35 compute-0 sudo[132619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaskojilwmzrksibjcaqugzlqgsbiyei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045275.0434012-342-216562678284839/AnsiballZ_copy.py'
Feb 02 15:14:35 compute-0 sudo[132619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:36 compute-0 python3.9[132621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045275.0434012-342-216562678284839/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:36 compute-0 sudo[132619]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:36 compute-0 sudo[132771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxrzfhurlkgihnumxtlxkfcoyuqfela ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045276.2395594-358-137233514829154/AnsiballZ_file.py'
Feb 02 15:14:36 compute-0 sudo[132771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:36 compute-0 python3.9[132773]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:36 compute-0 sudo[132771]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:36 compute-0 ceph-mon[75334]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:37 compute-0 sudo[132923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awnbdhvhhpuklnfuaqglhwwwlbnsmfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045276.8344793-366-78422934452518/AnsiballZ_stat.py'
Feb 02 15:14:37 compute-0 sudo[132923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:37 compute-0 python3.9[132925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:37 compute-0 sudo[132923]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:37 compute-0 sudo[133046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefmavjzdphphdrqkxmmecjbzguvyair ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045276.8344793-366-78422934452518/AnsiballZ_copy.py'
Feb 02 15:14:37 compute-0 sudo[133046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:37 compute-0 python3.9[133048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045276.8344793-366-78422934452518/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7be2e1fb7115b1c5b555a1bd4d7bf988bfc4e3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:37 compute-0 sudo[133046]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:38 compute-0 sshd-session[126945]: Connection closed by 192.168.122.30 port 55012
Feb 02 15:14:38 compute-0 sshd-session[126942]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:14:38 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Feb 02 15:14:38 compute-0 systemd[1]: session-44.scope: Consumed 20.278s CPU time.
Feb 02 15:14:38 compute-0 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Feb 02 15:14:38 compute-0 systemd-logind[786]: Removed session 44.
Feb 02 15:14:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:38 compute-0 ceph-mon[75334]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:41 compute-0 ceph-mon[75334]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:14:42
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms']
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:14:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:43 compute-0 ceph-mon[75334]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:43 compute-0 sshd-session[133073]: Accepted publickey for zuul from 192.168.122.30 port 35818 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:14:43 compute-0 systemd-logind[786]: New session 45 of user zuul.
Feb 02 15:14:43 compute-0 systemd[1]: Started Session 45 of User zuul.
Feb 02 15:14:43 compute-0 sshd-session[133073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:14:44 compute-0 sudo[133226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oedgfkbzqathmlfjrfcotptmnsmokufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045283.826647-17-50488801926693/AnsiballZ_file.py'
Feb 02 15:14:44 compute-0 sudo[133226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:44 compute-0 python3.9[133228]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:44 compute-0 sudo[133226]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:14:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:45 compute-0 ceph-mon[75334]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:45 compute-0 sudo[133378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbfcwohazatdvbdznxitwmtrxhbxkpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045284.7260807-29-20807002337841/AnsiballZ_stat.py'
Feb 02 15:14:45 compute-0 sudo[133378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:45 compute-0 python3.9[133380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:45 compute-0 sudo[133378]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:45 compute-0 sudo[133501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiekonmsnugvahfibhbcamdjucgnnwln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045284.7260807-29-20807002337841/AnsiballZ_copy.py'
Feb 02 15:14:45 compute-0 sudo[133501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:46 compute-0 python3.9[133503]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045284.7260807-29-20807002337841/.source.conf _original_basename=ceph.conf follow=False checksum=400f38502a9955d2ffd47cc968da060341c6a023 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:46 compute-0 sudo[133501]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:46 compute-0 sudo[133653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riravqpaeugvwdcyofbfdemxwtjwbkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045286.1863492-29-198394191320319/AnsiballZ_stat.py'
Feb 02 15:14:46 compute-0 sudo[133653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:46 compute-0 python3.9[133655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:14:46 compute-0 sudo[133653]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:46 compute-0 sudo[133776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adsltgoqyircmwayatcfszcnfwxndaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045286.1863492-29-198394191320319/AnsiballZ_copy.py'
Feb 02 15:14:46 compute-0 sudo[133776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:47 compute-0 ceph-mon[75334]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:47 compute-0 python3.9[133778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045286.1863492-29-198394191320319/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ef385d56b9da4b632a87103668ad2cc30cca0d44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:14:47 compute-0 sudo[133776]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:47 compute-0 sshd-session[133076]: Connection closed by 192.168.122.30 port 35818
Feb 02 15:14:47 compute-0 sshd-session[133073]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:14:47 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Feb 02 15:14:47 compute-0 systemd[1]: session-45.scope: Consumed 2.424s CPU time.
Feb 02 15:14:47 compute-0 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Feb 02 15:14:47 compute-0 systemd-logind[786]: Removed session 45.
Feb 02 15:14:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:49 compute-0 ceph-mon[75334]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:51 compute-0 ceph-mon[75334]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:51 compute-0 sudo[133803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:14:51 compute-0 sudo[133803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:51 compute-0 sudo[133803]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:51 compute-0 sudo[133828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:14:51 compute-0 sudo[133828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:52 compute-0 sudo[133828]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:14:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:14:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:14:52 compute-0 sudo[133884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:14:52 compute-0 sudo[133884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:52 compute-0 sudo[133884]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:52 compute-0 sudo[133909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:14:52 compute-0 sudo[133909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.494480088 +0000 UTC m=+0.049782600 container create efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:14:52 compute-0 systemd[1]: Started libpod-conmon-efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985.scope.
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.467682192 +0000 UTC m=+0.022984714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.587889883 +0000 UTC m=+0.143192375 container init efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.597683258 +0000 UTC m=+0.152985730 container start efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.601455599 +0000 UTC m=+0.156758071 container attach efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:14:52 compute-0 pensive_sanderson[133962]: 167 167
Feb 02 15:14:52 compute-0 systemd[1]: libpod-efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985.scope: Deactivated successfully.
Feb 02 15:14:52 compute-0 conmon[133962]: conmon efd6ba2c905193ac1177 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985.scope/container/memory.events
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.606096936 +0000 UTC m=+0.161399418 container died efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-93db099e11f569de477539d32641cf9845a0cd3af0585deb6370ace189b184be-merged.mount: Deactivated successfully.
Feb 02 15:14:52 compute-0 podman[133946]: 2026-02-02 15:14:52.65446486 +0000 UTC m=+0.209767372 container remove efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:14:52 compute-0 systemd[1]: libpod-conmon-efd6ba2c905193ac1177c09ee299408c7220ed0092ac9b7a9c268f9a69c5a985.scope: Deactivated successfully.
Feb 02 15:14:52 compute-0 podman[133987]: 2026-02-02 15:14:52.820841611 +0000 UTC m=+0.052916620 container create fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:14:52 compute-0 systemd[1]: Started libpod-conmon-fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2.scope.
Feb 02 15:14:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:52 compute-0 podman[133987]: 2026-02-02 15:14:52.798269755 +0000 UTC m=+0.030344804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:52 compute-0 podman[133987]: 2026-02-02 15:14:52.90233082 +0000 UTC m=+0.134405899 container init fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:14:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:52 compute-0 podman[133987]: 2026-02-02 15:14:52.910445723 +0000 UTC m=+0.142520772 container start fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:14:52 compute-0 podman[133987]: 2026-02-02 15:14:52.914299786 +0000 UTC m=+0.146374795 container attach fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:14:52 compute-0 sshd-session[134001]: Accepted publickey for zuul from 192.168.122.30 port 60704 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:14:52 compute-0 systemd-logind[786]: New session 46 of user zuul.
Feb 02 15:14:52 compute-0 systemd[1]: Started Session 46 of User zuul.
Feb 02 15:14:52 compute-0 sshd-session[134001]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:14:53 compute-0 ceph-mon[75334]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:53 compute-0 unruffled_sinoussi[134005]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:14:53 compute-0 unruffled_sinoussi[134005]: --> All data devices are unavailable
Feb 02 15:14:53 compute-0 systemd[1]: libpod-fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2.scope: Deactivated successfully.
Feb 02 15:14:53 compute-0 podman[133987]: 2026-02-02 15:14:53.406700763 +0000 UTC m=+0.638775832 container died fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-11762f441085edec03599881fb2ca70828273bb5d613a56637913e4b82ceb816-merged.mount: Deactivated successfully.
Feb 02 15:14:53 compute-0 podman[133987]: 2026-02-02 15:14:53.454956363 +0000 UTC m=+0.687031382 container remove fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:14:53 compute-0 systemd[1]: libpod-conmon-fdbc19882cf77ea006f9534c89ff5813e88d301e610737c554ad78c7ae4f21e2.scope: Deactivated successfully.
Feb 02 15:14:53 compute-0 sudo[133909]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:53 compute-0 sudo[134118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:14:53 compute-0 sudo[134118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:53 compute-0 sudo[134118]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:53 compute-0 sudo[134163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:14:53 compute-0 sudo[134163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.853147784 +0000 UTC m=+0.041355603 container create a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 15:14:53 compute-0 systemd[1]: Started libpod-conmon-a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448.scope.
Feb 02 15:14:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.923306368 +0000 UTC m=+0.111514227 container init a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.834494331 +0000 UTC m=+0.022702170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.930310431 +0000 UTC m=+0.118518250 container start a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.933887438 +0000 UTC m=+0.122095327 container attach a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:14:53 compute-0 blissful_babbage[134266]: 167 167
Feb 02 15:14:53 compute-0 systemd[1]: libpod-a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448.scope: Deactivated successfully.
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.935503419 +0000 UTC m=+0.123711258 container died a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:14:53 compute-0 python3.9[134238]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b806c100f3c4cdef44b128c02cc78a38dc9168f85392be67322833dd1942178d-merged.mount: Deactivated successfully.
Feb 02 15:14:53 compute-0 podman[134250]: 2026-02-02 15:14:53.977425371 +0000 UTC m=+0.165633180 container remove a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:14:53 compute-0 systemd[1]: libpod-conmon-a6e8f007921d856ac85724332d270a664dc0f189f8cb1174306d6bba36f0e448.scope: Deactivated successfully.
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.110211387 +0000 UTC m=+0.045526130 container create 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:14:54 compute-0 systemd[1]: Started libpod-conmon-7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728.scope.
Feb 02 15:14:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2298880596e646c26a5c7d7e487e05286f81c53746b5cddace17698200d27166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2298880596e646c26a5c7d7e487e05286f81c53746b5cddace17698200d27166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2298880596e646c26a5c7d7e487e05286f81c53746b5cddace17698200d27166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2298880596e646c26a5c7d7e487e05286f81c53746b5cddace17698200d27166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.089119179 +0000 UTC m=+0.024433972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.211260546 +0000 UTC m=+0.146575329 container init 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.221497549 +0000 UTC m=+0.156812302 container start 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.225905532 +0000 UTC m=+0.161220365 container attach 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]: {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     "0": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "devices": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "/dev/loop3"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             ],
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_name": "ceph_lv0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_size": "21470642176",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "name": "ceph_lv0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "tags": {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_name": "ceph",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.crush_device_class": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.encrypted": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.objectstore": "bluestore",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_id": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.vdo": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.with_tpm": "0"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             },
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "vg_name": "ceph_vg0"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         }
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     ],
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     "1": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "devices": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "/dev/loop4"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             ],
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_name": "ceph_lv1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_size": "21470642176",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "name": "ceph_lv1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "tags": {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_name": "ceph",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.crush_device_class": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.encrypted": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.objectstore": "bluestore",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_id": "1",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.vdo": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.with_tpm": "0"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             },
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "vg_name": "ceph_vg1"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         }
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     ],
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     "2": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "devices": [
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "/dev/loop5"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             ],
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_name": "ceph_lv2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_size": "21470642176",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "name": "ceph_lv2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "tags": {
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.cluster_name": "ceph",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.crush_device_class": "",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.encrypted": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.objectstore": "bluestore",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osd_id": "2",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.vdo": "0",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:                 "ceph.with_tpm": "0"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             },
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "type": "block",
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:             "vg_name": "ceph_vg2"
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:         }
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]:     ]
Feb 02 15:14:54 compute-0 vibrant_snyder[134312]: }
Feb 02 15:14:54 compute-0 systemd[1]: libpod-7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728.scope: Deactivated successfully.
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.549442141 +0000 UTC m=+0.484756934 container died 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2298880596e646c26a5c7d7e487e05286f81c53746b5cddace17698200d27166-merged.mount: Deactivated successfully.
Feb 02 15:14:54 compute-0 podman[134294]: 2026-02-02 15:14:54.591155818 +0000 UTC m=+0.526470561 container remove 7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb 02 15:14:54 compute-0 systemd[1]: libpod-conmon-7c8fb23bd6d2b4c90e223794e7148d161fd7c44fb32120b8ed37f6028c6d7728.scope: Deactivated successfully.
Feb 02 15:14:54 compute-0 sudo[134163]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:54 compute-0 sudo[134410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:14:54 compute-0 sudo[134410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:54 compute-0 sudo[134410]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:54 compute-0 sudo[134435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:14:54 compute-0 sudo[134435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:54 compute-0 sudo[134558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwaumcswfoynflletbgcjeqauovjshuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045294.4846182-29-160079300862931/AnsiballZ_file.py'
Feb 02 15:14:54 compute-0 sudo[134558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:54 compute-0 podman[134530]: 2026-02-02 15:14:54.992969716 +0000 UTC m=+0.038255603 container create f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:14:55 compute-0 ceph-mon[75334]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:55 compute-0 systemd[1]: Started libpod-conmon-f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f.scope.
Feb 02 15:14:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:55.063863315 +0000 UTC m=+0.109149222 container init f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:55.071616771 +0000 UTC m=+0.116902658 container start f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:54.976065707 +0000 UTC m=+0.021351624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:55.075163688 +0000 UTC m=+0.120449595 container attach f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:14:55 compute-0 zen_stonebraker[134565]: 167 167
Feb 02 15:14:55 compute-0 systemd[1]: libpod-f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f.scope: Deactivated successfully.
Feb 02 15:14:55 compute-0 conmon[134565]: conmon f46d1482418d86450d8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f.scope/container/memory.events
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:55.077603544 +0000 UTC m=+0.122889441 container died f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d8ebbbf03b1cb1661409e14a660672c935a89b1295b39c01f0e0a56a212a25b-merged.mount: Deactivated successfully.
Feb 02 15:14:55 compute-0 podman[134530]: 2026-02-02 15:14:55.116335935 +0000 UTC m=+0.161621882 container remove f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:14:55 compute-0 systemd[1]: libpod-conmon-f46d1482418d86450d8c84f0093c9fb98eb0938b86d4227abc69bee1e255f38f.scope: Deactivated successfully.
Feb 02 15:14:55 compute-0 python3.9[134560]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:55 compute-0 sudo[134558]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:55 compute-0 podman[134588]: 2026-02-02 15:14:55.287909795 +0000 UTC m=+0.057639890 container create c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:14:55 compute-0 systemd[1]: Started libpod-conmon-c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705.scope.
Feb 02 15:14:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:14:55 compute-0 podman[134588]: 2026-02-02 15:14:55.268495609 +0000 UTC m=+0.038225734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e43390874da1742173a7a6ea91705276c9cc9c6b93b3021d7d0ebe1f499931/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e43390874da1742173a7a6ea91705276c9cc9c6b93b3021d7d0ebe1f499931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e43390874da1742173a7a6ea91705276c9cc9c6b93b3021d7d0ebe1f499931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e43390874da1742173a7a6ea91705276c9cc9c6b93b3021d7d0ebe1f499931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:14:55 compute-0 podman[134588]: 2026-02-02 15:14:55.389117816 +0000 UTC m=+0.158847931 container init c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:14:55 compute-0 podman[134588]: 2026-02-02 15:14:55.396544086 +0000 UTC m=+0.166274211 container start c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:14:55 compute-0 podman[134588]: 2026-02-02 15:14:55.399771867 +0000 UTC m=+0.169501962 container attach c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 02 15:14:55 compute-0 sudo[134769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyijclxywszvbpnnnnvxqutooljgielz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045295.4061835-29-45554138857269/AnsiballZ_file.py'
Feb 02 15:14:55 compute-0 sudo[134769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:55 compute-0 python3.9[134771]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:14:55 compute-0 sudo[134769]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:56 compute-0 lvm[134906]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:14:56 compute-0 lvm[134906]: VG ceph_vg1 finished
Feb 02 15:14:56 compute-0 lvm[134903]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:14:56 compute-0 lvm[134903]: VG ceph_vg0 finished
Feb 02 15:14:56 compute-0 lvm[134914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:14:56 compute-0 lvm[134914]: VG ceph_vg2 finished
Feb 02 15:14:56 compute-0 strange_booth[134629]: {}
Feb 02 15:14:56 compute-0 systemd[1]: libpod-c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705.scope: Deactivated successfully.
Feb 02 15:14:56 compute-0 systemd[1]: libpod-c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705.scope: Consumed 1.169s CPU time.
Feb 02 15:14:56 compute-0 podman[134588]: 2026-02-02 15:14:56.216808005 +0000 UTC m=+0.986538100 container died c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2e43390874da1742173a7a6ea91705276c9cc9c6b93b3021d7d0ebe1f499931-merged.mount: Deactivated successfully.
Feb 02 15:14:56 compute-0 podman[134588]: 2026-02-02 15:14:56.26315458 +0000 UTC m=+1.032884675 container remove c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:14:56 compute-0 systemd[1]: libpod-conmon-c6ff8ca021ed26688c1b6ed4501e7615ca5839760e6282f86bff88b1bbe3a705.scope: Deactivated successfully.
Feb 02 15:14:56 compute-0 sudo[134435]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:14:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:14:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:56 compute-0 sudo[135004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:14:56 compute-0 sudo[135004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:14:56 compute-0 sudo[135004]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:56 compute-0 python3.9[135003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:14:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:14:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:57 compute-0 sudo[135178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lufazixkqekjlzbuazcgsqrircakddjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045296.7169843-52-35264794026078/AnsiballZ_seboolean.py'
Feb 02 15:14:57 compute-0 sudo[135178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:14:57 compute-0 ceph-mon[75334]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:57 compute-0 python3.9[135180]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 02 15:14:58 compute-0 sudo[135178]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:58 compute-0 sudo[135334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hglqkbipxpuymmwvsuabrvqbzpbcpzde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045298.5297918-62-233157189781258/AnsiballZ_setup.py'
Feb 02 15:14:58 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 02 15:14:58 compute-0 sudo[135334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:14:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:59 compute-0 ceph-mon[75334]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:14:59 compute-0 python3.9[135336]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:14:59 compute-0 sudo[135334]: pam_unix(sudo:session): session closed for user root
Feb 02 15:14:59 compute-0 sudo[135418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqbifuklowmdbywljijflttcljcijfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045298.5297918-62-233157189781258/AnsiballZ_dnf.py'
Feb 02 15:14:59 compute-0 sudo[135418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:00 compute-0 python3.9[135420]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:15:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:01 compute-0 ceph-mon[75334]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:01 compute-0 sudo[135418]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:02 compute-0 sudo[135571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydekotlchufeaftgjwmeyybrymksbge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045301.6140978-74-250993033682347/AnsiballZ_systemd.py'
Feb 02 15:15:02 compute-0 sudo[135571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:02 compute-0 python3.9[135573]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:15:02 compute-0 sudo[135571]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:03 compute-0 ceph-mon[75334]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:03 compute-0 sudo[135726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtjryzjmtatikkcxncqlxfuidiabofd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045302.7346404-82-111843095842657/AnsiballZ_edpm_nftables_snippet.py'
Feb 02 15:15:03 compute-0 sudo[135726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:03 compute-0 python3[135728]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 02 15:15:03 compute-0 sudo[135726]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:03 compute-0 sudo[135878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jostuwhcqyyguukvvuwjutvyukcrbgun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045303.5826576-91-269099102840756/AnsiballZ_file.py'
Feb 02 15:15:03 compute-0 sudo[135878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:04 compute-0 python3.9[135880]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:04 compute-0 sudo[135878]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:04 compute-0 sudo[136030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhcujlfmhjmjuresazobisgahkaqyiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045304.280256-99-61666479956970/AnsiballZ_stat.py'
Feb 02 15:15:04 compute-0 sudo[136030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:04 compute-0 python3.9[136032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:05 compute-0 ceph-mon[75334]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:05 compute-0 sudo[136030]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:05 compute-0 sudo[136108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjymlstxdlhkidgtulwuuufmdwwrnssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045304.280256-99-61666479956970/AnsiballZ_file.py'
Feb 02 15:15:05 compute-0 sudo[136108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:05 compute-0 python3.9[136110]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:05 compute-0 sudo[136108]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:05 compute-0 sudo[136260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzflgqfvrvdoxsanujnuykyutpituqts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045305.5541613-111-149891934315587/AnsiballZ_stat.py'
Feb 02 15:15:05 compute-0 sudo[136260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:06 compute-0 python3.9[136262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:06 compute-0 sudo[136260]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:06 compute-0 sudo[136338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwwcmjxtccyzetvxijaxxkjsugvcvme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045305.5541613-111-149891934315587/AnsiballZ_file.py'
Feb 02 15:15:06 compute-0 sudo[136338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:06 compute-0 python3.9[136340]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.87ambbnz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:06 compute-0 sudo[136338]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:07 compute-0 ceph-mon[75334]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:07 compute-0 sudo[136490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjttntubbfhmopupsdubsnxuhgxdjts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045306.735703-123-151685667965431/AnsiballZ_stat.py'
Feb 02 15:15:07 compute-0 sudo[136490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:07 compute-0 python3.9[136492]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:07 compute-0 sudo[136490]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:07 compute-0 sudo[136568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyepxpagkckanumpftcbsclrzbmqgwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045306.735703-123-151685667965431/AnsiballZ_file.py'
Feb 02 15:15:07 compute-0 sudo[136568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:07 compute-0 python3.9[136570]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:07 compute-0 sudo[136568]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:08 compute-0 sudo[136720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjrwjsgxeicgttxwrrfyzdmnlegckvww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045307.869175-136-177862497614337/AnsiballZ_command.py'
Feb 02 15:15:08 compute-0 sudo[136720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:08 compute-0 python3.9[136722]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:08 compute-0 sudo[136720]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:08 compute-0 sudo[136873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbuquuaxzsjpmoqgjsnpbrmtontnzkj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045308.5884275-144-17595930693021/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 15:15:08 compute-0 sudo[136873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:09 compute-0 ceph-mon[75334]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:09 compute-0 python3[136875]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 15:15:09 compute-0 sudo[136873]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:09 compute-0 sudo[137025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlijtzmxndkfneyhnqqwjoncsbfeunk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045309.3458548-152-23241638819747/AnsiballZ_stat.py'
Feb 02 15:15:09 compute-0 sudo[137025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:09 compute-0 python3.9[137027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:09 compute-0 sudo[137025]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:10 compute-0 sudo[137150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptvdgmrkbmjbdnxyiriejwylbuclafsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045309.3458548-152-23241638819747/AnsiballZ_copy.py'
Feb 02 15:15:10 compute-0 sudo[137150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:10 compute-0 python3.9[137152]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045309.3458548-152-23241638819747/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:10 compute-0 sudo[137150]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:10 compute-0 sudo[137302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rysowkgwtoqnthryfknxitcfrmohqaro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045310.6575267-167-23914898298422/AnsiballZ_stat.py'
Feb 02 15:15:10 compute-0 sudo[137302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:11 compute-0 ceph-mon[75334]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:11 compute-0 python3.9[137304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:11 compute-0 sudo[137302]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:11 compute-0 sudo[137427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzzkrkehjkphyvlsorzlfnmsavsbdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045310.6575267-167-23914898298422/AnsiballZ_copy.py'
Feb 02 15:15:11 compute-0 sudo[137427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:11 compute-0 python3.9[137429]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045310.6575267-167-23914898298422/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:11 compute-0 sudo[137427]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:12 compute-0 sudo[137579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpzfuzqcgbhrtpnlrfwqddltaflkqrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045311.892738-182-236132230561654/AnsiballZ_stat.py'
Feb 02 15:15:12 compute-0 sudo[137579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:12 compute-0 python3.9[137581]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:12 compute-0 sudo[137579]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:12 compute-0 sudo[137704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivftbqqfqmqpkpdkiltiszsgnoxqhtyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045311.892738-182-236132230561654/AnsiballZ_copy.py'
Feb 02 15:15:12 compute-0 sudo[137704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:13 compute-0 python3.9[137706]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045311.892738-182-236132230561654/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:13 compute-0 ceph-mon[75334]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:13 compute-0 sudo[137704]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:13 compute-0 sudo[137856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yorrwubaffdnnlcewfnuqggnxbkboqiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045313.186152-197-196739138431844/AnsiballZ_stat.py'
Feb 02 15:15:13 compute-0 sudo[137856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:13 compute-0 python3.9[137858]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:13 compute-0 sudo[137856]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:13 compute-0 sudo[137981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-earukkhbqxivsjmbvlwrabzedphqsuen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045313.186152-197-196739138431844/AnsiballZ_copy.py'
Feb 02 15:15:13 compute-0 sudo[137981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:14 compute-0 python3.9[137983]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045313.186152-197-196739138431844/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:14 compute-0 sudo[137981]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:14 compute-0 sudo[138133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaotiujjbiaqlosyndldjllrumimyaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045314.348511-212-190555783388986/AnsiballZ_stat.py'
Feb 02 15:15:14 compute-0 sudo[138133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:14 compute-0 python3.9[138135]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:14 compute-0 sudo[138133]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:15 compute-0 ceph-mon[75334]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:15 compute-0 sudo[138258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swstngnxwzgzdyfxmkpypmneexymkedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045314.348511-212-190555783388986/AnsiballZ_copy.py'
Feb 02 15:15:15 compute-0 sudo[138258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:15 compute-0 python3.9[138260]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045314.348511-212-190555783388986/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:15 compute-0 sudo[138258]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:15 compute-0 sudo[138410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myhznfmjpnzgkilckfltbpncmbmllcfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045315.6976502-227-167824100171424/AnsiballZ_file.py'
Feb 02 15:15:15 compute-0 sudo[138410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:16 compute-0 python3.9[138412]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:16 compute-0 sudo[138410]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:16 compute-0 sudo[138562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htsuicwpcqmbmlkwobuhrktjoktrqyku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045316.355841-235-154798236019727/AnsiballZ_command.py'
Feb 02 15:15:16 compute-0 sudo[138562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:16 compute-0 python3.9[138564]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:16 compute-0 sudo[138562]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:17 compute-0 ceph-mon[75334]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:17 compute-0 sudo[138717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxjycqasvnbumsusdlquoakktjflqoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045317.023865-243-148851203728/AnsiballZ_blockinfile.py'
Feb 02 15:15:17 compute-0 sudo[138717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:17 compute-0 python3.9[138719]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:17 compute-0 sudo[138717]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:18 compute-0 sudo[138869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmaqsvwnutbbfhegbdstkbcmwzywpnor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045317.8447387-252-140563544394642/AnsiballZ_command.py'
Feb 02 15:15:18 compute-0 sudo[138869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:18 compute-0 python3.9[138871]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:18 compute-0 sudo[138869]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:18 compute-0 sudo[139022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobnwfcxsbrmojrekpnugghstcmsduwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045318.4497533-260-113729872454279/AnsiballZ_stat.py'
Feb 02 15:15:18 compute-0 sudo[139022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:18 compute-0 python3.9[139024]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:15:18 compute-0 sudo[139022]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:19 compute-0 ceph-mon[75334]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:19 compute-0 sudo[139176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvvfrbsiiomsqdhamemlwthnyqmeoycg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045319.087503-268-239874511159243/AnsiballZ_command.py'
Feb 02 15:15:19 compute-0 sudo[139176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:19 compute-0 python3.9[139178]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:19 compute-0 sudo[139176]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:20 compute-0 sudo[139331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgfuhrxndoqachxabeqxnqjqesntvtcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045319.747698-276-65090705277637/AnsiballZ_file.py'
Feb 02 15:15:20 compute-0 sudo[139331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:20 compute-0 python3.9[139333]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:20 compute-0 sudo[139331]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:21 compute-0 ceph-mon[75334]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:21 compute-0 python3.9[139483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:15:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:22 compute-0 sudo[139634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnpliolgnthxgovnwjqghoqfmrqyznwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045321.9331286-316-37922787011243/AnsiballZ_command.py'
Feb 02 15:15:22 compute-0 sudo[139634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:22 compute-0 python3.9[139636]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:22 compute-0 ovs-vsctl[139637]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 02 15:15:22 compute-0 sudo[139634]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:22 compute-0 sudo[139787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnmfgcjkusfmkulmhttnyprbngxqamy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045322.6500597-325-117705561933746/AnsiballZ_command.py'
Feb 02 15:15:22 compute-0 sudo[139787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:23 compute-0 ceph-mon[75334]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:23 compute-0 python3.9[139789]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:23 compute-0 sudo[139787]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:23 compute-0 sudo[139942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkjpgldghmvtzomzfqxfjvlwgzakrxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045323.364178-333-252604482388827/AnsiballZ_command.py'
Feb 02 15:15:23 compute-0 sudo[139942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:23 compute-0 python3.9[139944]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:23 compute-0 ovs-vsctl[139945]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 02 15:15:23 compute-0 sudo[139942]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:24 compute-0 python3.9[140095]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:15:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:25 compute-0 ceph-mon[75334]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:25 compute-0 sudo[140247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imkfmykioiarybinnqvhoqlvdlispbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045324.8178406-350-220611576111304/AnsiballZ_file.py'
Feb 02 15:15:25 compute-0 sudo[140247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:25 compute-0 python3.9[140249]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:25 compute-0 sudo[140247]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:25 compute-0 sudo[140399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcasoiwsekfcefidahxggnndqxkbpfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045325.449135-358-187427523916348/AnsiballZ_stat.py'
Feb 02 15:15:25 compute-0 sudo[140399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:25 compute-0 python3.9[140401]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:25 compute-0 sudo[140399]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:26 compute-0 sudo[140477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjqeqjgwvoxuamliyahqvddzdpjeuez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045325.449135-358-187427523916348/AnsiballZ_file.py'
Feb 02 15:15:26 compute-0 sudo[140477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:26 compute-0 python3.9[140479]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:26 compute-0 sudo[140477]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:26 compute-0 sudo[140629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnfpndxtlzeokxjyunuugafhunhiixbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045326.4271343-358-133845288913277/AnsiballZ_stat.py'
Feb 02 15:15:26 compute-0 sudo[140629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:26 compute-0 python3.9[140631]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:26 compute-0 sudo[140629]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:27 compute-0 ceph-mon[75334]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:27 compute-0 sudo[140707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsktmpgzqpdtlxkhuipyyogyqmxwvubj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045326.4271343-358-133845288913277/AnsiballZ_file.py'
Feb 02 15:15:27 compute-0 sudo[140707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:27 compute-0 python3.9[140709]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:27 compute-0 sudo[140707]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:27 compute-0 sudo[140859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjmvkawnubxukbsxzfrjuuccotpawuzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045327.4903455-381-237967608220088/AnsiballZ_file.py'
Feb 02 15:15:27 compute-0 sudo[140859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:27 compute-0 python3.9[140861]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:27 compute-0 sudo[140859]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:28 compute-0 sudo[141011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkxwibtevqtltgvhepfarphskfomxdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045328.0979872-389-102811818003735/AnsiballZ_stat.py'
Feb 02 15:15:28 compute-0 sudo[141011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:28 compute-0 python3.9[141013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:28 compute-0 sudo[141011]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:28 compute-0 sudo[141089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhouyhxylkpceifyuhzsrulnxqkwffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045328.0979872-389-102811818003735/AnsiballZ_file.py'
Feb 02 15:15:28 compute-0 sudo[141089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:29 compute-0 ceph-mon[75334]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:29 compute-0 python3.9[141091]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:29 compute-0 sudo[141089]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:29 compute-0 sudo[141241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijzzprbzyefipjhnhupsozxclpeswzrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045329.2239602-401-226076504840244/AnsiballZ_stat.py'
Feb 02 15:15:29 compute-0 sudo[141241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:29 compute-0 python3.9[141243]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:29 compute-0 sudo[141241]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:29 compute-0 sudo[141319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvdcjqdtxikmehskpvfyozpplductwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045329.2239602-401-226076504840244/AnsiballZ_file.py'
Feb 02 15:15:29 compute-0 sudo[141319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:30 compute-0 python3.9[141321]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:30 compute-0 sudo[141319]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:30 compute-0 sudo[141471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chtsczjfznewzpsqcxjiiajdvoqpezmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045330.3123178-413-205260240863253/AnsiballZ_systemd.py'
Feb 02 15:15:30 compute-0 sudo[141471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:30 compute-0 python3.9[141473]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:15:30 compute-0 systemd[1]: Reloading.
Feb 02 15:15:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:30 compute-0 systemd-rc-local-generator[141497]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:15:30 compute-0 systemd-sysv-generator[141503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:31 compute-0 sudo[141471]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:31 compute-0 sudo[141659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdljbzhsdryxprctmtnrraxppqqegqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045331.3638039-421-44953486588801/AnsiballZ_stat.py'
Feb 02 15:15:31 compute-0 sudo[141659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.767215) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331767291, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1531, "num_deletes": 250, "total_data_size": 2322540, "memory_usage": 2357528, "flush_reason": "Manual Compaction"}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331774229, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1344189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7362, "largest_seqno": 8892, "table_properties": {"data_size": 1339113, "index_size": 2280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13879, "raw_average_key_size": 20, "raw_value_size": 1327458, "raw_average_value_size": 1932, "num_data_blocks": 108, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045178, "oldest_key_time": 1770045178, "file_creation_time": 1770045331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7037 microseconds, and 3378 cpu microseconds.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.774269) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1344189 bytes OK
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.774285) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.775554) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.775566) EVENT_LOG_v1 {"time_micros": 1770045331775562, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.775581) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2315773, prev total WAL file size 2315773, number of live WAL files 2.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.776276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1312KB)], [20(7545KB)]
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331776376, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9070910, "oldest_snapshot_seqno": -1}
Feb 02 15:15:31 compute-0 python3.9[141661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3346 keys, 7005775 bytes, temperature: kUnknown
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331825090, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7005775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6980306, "index_size": 16016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80135, "raw_average_key_size": 23, "raw_value_size": 6916701, "raw_average_value_size": 2067, "num_data_blocks": 710, "num_entries": 3346, "num_filter_entries": 3346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.825324) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7005775 bytes
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.826534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(12.0) write-amplify(5.2) OK, records in: 3787, records dropped: 441 output_compression: NoCompression
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.826555) EVENT_LOG_v1 {"time_micros": 1770045331826544, "job": 6, "event": "compaction_finished", "compaction_time_micros": 48778, "compaction_time_cpu_micros": 30554, "output_level": 6, "num_output_files": 1, "total_output_size": 7005775, "num_input_records": 3787, "num_output_records": 3346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331826799, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045331827504, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.776091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.827618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.827624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.827626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.827627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:15:31.827628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:15:31 compute-0 sudo[141659]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:32 compute-0 sudo[141737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlxkrprxixrjqxzqwrtvhxhtlgpqgkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045331.3638039-421-44953486588801/AnsiballZ_file.py'
Feb 02 15:15:32 compute-0 sudo[141737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:32 compute-0 python3.9[141739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:32 compute-0 sudo[141737]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:32 compute-0 sudo[141889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htwffktxyiqawduidjlkjbhmbrcrfoxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045332.440396-433-274507786611795/AnsiballZ_stat.py'
Feb 02 15:15:32 compute-0 sudo[141889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:32 compute-0 python3.9[141891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:32 compute-0 sudo[141889]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:33 compute-0 ceph-mon[75334]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:33 compute-0 sudo[141967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaxmxibrdtrqnfeeamsiwxmmtqdpfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045332.440396-433-274507786611795/AnsiballZ_file.py'
Feb 02 15:15:33 compute-0 sudo[141967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:33 compute-0 python3.9[141969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:33 compute-0 sudo[141967]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:33 compute-0 sudo[142119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxhhbbvofvmpwgeuuatrcbnlxdpiyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045333.397984-445-268399754857998/AnsiballZ_systemd.py'
Feb 02 15:15:33 compute-0 sudo[142119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:34 compute-0 python3.9[142121]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:15:34 compute-0 systemd[1]: Reloading.
Feb 02 15:15:34 compute-0 systemd-rc-local-generator[142144]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:15:34 compute-0 systemd-sysv-generator[142150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:15:34 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 15:15:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 15:15:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 15:15:34 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 15:15:34 compute-0 sudo[142119]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:34 compute-0 sudo[142311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtolztxnjigtvuhvlcugepmdfekxzoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045334.6337502-455-108165540366371/AnsiballZ_file.py'
Feb 02 15:15:34 compute-0 sudo[142311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:35 compute-0 ceph-mon[75334]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:35 compute-0 python3.9[142313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:35 compute-0 sudo[142311]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:35 compute-0 sudo[142463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzgfspqujmuajiblkofhaaewfclswhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045335.2844272-463-191285487157352/AnsiballZ_stat.py'
Feb 02 15:15:35 compute-0 sudo[142463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:35 compute-0 python3.9[142465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:35 compute-0 sudo[142463]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:36 compute-0 sudo[142586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmmypcffrysymkrizdnkbgvnwxhcder ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045335.2844272-463-191285487157352/AnsiballZ_copy.py'
Feb 02 15:15:36 compute-0 sudo[142586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:36 compute-0 python3.9[142588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045335.2844272-463-191285487157352/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:36 compute-0 sudo[142586]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:36 compute-0 sudo[142738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgscmdmwfyydrbhwuvwauewybdvcsqgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045336.5935385-480-194772285763539/AnsiballZ_file.py'
Feb 02 15:15:36 compute-0 sudo[142738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:37 compute-0 ceph-mon[75334]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:37 compute-0 python3.9[142740]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:37 compute-0 sudo[142738]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:37 compute-0 sudo[142890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pljanprtslnhxxhiyrjoyrqnfufwgywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045337.2932172-488-276482801307642/AnsiballZ_file.py'
Feb 02 15:15:37 compute-0 sudo[142890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:37 compute-0 python3.9[142892]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:15:37 compute-0 sudo[142890]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:38 compute-0 sudo[143042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlfsyfztpjenxinunfbutyrymlneeqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045338.0078669-496-179182504325105/AnsiballZ_stat.py'
Feb 02 15:15:38 compute-0 sudo[143042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:38 compute-0 python3.9[143044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:38 compute-0 sudo[143042]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:38 compute-0 sudo[143165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbebpvusrdqkozirwxkihojyacwvywck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045338.0078669-496-179182504325105/AnsiballZ_copy.py'
Feb 02 15:15:38 compute-0 sudo[143165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:39 compute-0 ceph-mon[75334]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:39 compute-0 python3.9[143167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045338.0078669-496-179182504325105/.source.json _original_basename=.8yedn0w0 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:39 compute-0 sudo[143165]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:39 compute-0 python3.9[143317]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:41 compute-0 ceph-mon[75334]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:41 compute-0 sudo[143738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwwghxetsbldreanbtficlcggqcsufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045341.395594-536-86260347332271/AnsiballZ_container_config_data.py'
Feb 02 15:15:41 compute-0 sudo[143738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:42 compute-0 python3.9[143740]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 02 15:15:42 compute-0 sudo[143738]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:15:42
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:15:42 compute-0 sudo[143890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsfoqtddlkfcezbnjfpxrplnghkrliv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045342.4252157-547-198184132807198/AnsiballZ_container_config_hash.py'
Feb 02 15:15:42 compute-0 sudo[143890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:43 compute-0 ceph-mon[75334]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:43 compute-0 python3.9[143892]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 15:15:43 compute-0 sudo[143890]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:43 compute-0 sudo[144042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvsrgifrxpzgyahueucwupwngeahszbf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045343.343476-557-68895243047891/AnsiballZ_edpm_container_manage.py'
Feb 02 15:15:43 compute-0 sudo[144042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:44 compute-0 python3[144044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:15:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:45 compute-0 ceph-mon[75334]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:47 compute-0 ceph-mon[75334]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:49 compute-0 ceph-mon[75334]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:49 compute-0 podman[144057]: 2026-02-02 15:15:49.527472742 +0000 UTC m=+5.386816132 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 15:15:49 compute-0 podman[144177]: 2026-02-02 15:15:49.641143269 +0000 UTC m=+0.040765925 container create 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:15:49 compute-0 podman[144177]: 2026-02-02 15:15:49.617663625 +0000 UTC m=+0.017286241 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 15:15:49 compute-0 python3[144044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 02 15:15:49 compute-0 sudo[144042]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:50 compute-0 sudo[144367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkwdcwesrkewtvmcxfagdmayytafrae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045350.0136683-565-262210797650746/AnsiballZ_stat.py'
Feb 02 15:15:50 compute-0 sudo[144367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:50 compute-0 python3.9[144369]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:15:50 compute-0 sudo[144367]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:51 compute-0 ceph-mon[75334]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:51 compute-0 sudo[144521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxfvbauqpbsyzdqvdcdzypuoccxtrzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045350.8204777-574-219373971645506/AnsiballZ_file.py'
Feb 02 15:15:51 compute-0 sudo[144521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:51 compute-0 python3.9[144523]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:51 compute-0 sudo[144521]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:51 compute-0 sudo[144597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidkufweqvakklizpbrvbfpmdpgtgwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045350.8204777-574-219373971645506/AnsiballZ_stat.py'
Feb 02 15:15:51 compute-0 sudo[144597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:51 compute-0 python3.9[144599]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:15:51 compute-0 sudo[144597]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:52 compute-0 sudo[144748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djcyzvyzkwgducuzmfuemdotomnjdkso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045351.938646-574-225400418291337/AnsiballZ_copy.py'
Feb 02 15:15:52 compute-0 sudo[144748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:52 compute-0 python3.9[144750]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770045351.938646-574-225400418291337/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:52 compute-0 sudo[144748]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:52 compute-0 sudo[144824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozrepxeigryiipxqhwmuclvwvgocmzcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045351.938646-574-225400418291337/AnsiballZ_systemd.py'
Feb 02 15:15:52 compute-0 sudo[144824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:53 compute-0 ceph-mon[75334]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:53 compute-0 python3.9[144826]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:15:53 compute-0 systemd[1]: Reloading.
Feb 02 15:15:53 compute-0 systemd-sysv-generator[144857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:15:53 compute-0 systemd-rc-local-generator[144853]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:15:53 compute-0 sudo[144824]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:53 compute-0 sudo[144937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjsqfqztwzkgwwclqthhtxxkuuctcklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045351.938646-574-225400418291337/AnsiballZ_systemd.py'
Feb 02 15:15:53 compute-0 sudo[144937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:54 compute-0 python3.9[144939]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:15:54 compute-0 systemd[1]: Reloading.
Feb 02 15:15:54 compute-0 systemd-sysv-generator[144972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:15:54 compute-0 systemd-rc-local-generator[144967]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:15:54 compute-0 systemd[1]: Starting ovn_controller container...
Feb 02 15:15:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8b0ff7004523174dc43baca01f6befd2088845eb030565ad109aef317e90b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53.
Feb 02 15:15:54 compute-0 podman[144980]: 2026-02-02 15:15:54.535534542 +0000 UTC m=+0.149766491 container init 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 15:15:54 compute-0 ovn_controller[144995]: + sudo -E kolla_set_configs
Feb 02 15:15:54 compute-0 podman[144980]: 2026-02-02 15:15:54.570183281 +0000 UTC m=+0.184415200 container start 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:15:54 compute-0 edpm-start-podman-container[144980]: ovn_controller
Feb 02 15:15:54 compute-0 systemd[1]: Created slice User Slice of UID 0.
Feb 02 15:15:54 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 02 15:15:54 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 02 15:15:54 compute-0 systemd[1]: Starting User Manager for UID 0...
Feb 02 15:15:54 compute-0 systemd[145027]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Feb 02 15:15:54 compute-0 podman[145002]: 2026-02-02 15:15:54.652789104 +0000 UTC m=+0.071478081 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:15:54 compute-0 edpm-start-podman-container[144979]: Creating additional drop-in dependency for "ovn_controller" (3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53)
Feb 02 15:15:54 compute-0 systemd[1]: 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53-5175280d82ea74cb.service: Main process exited, code=exited, status=1/FAILURE
Feb 02 15:15:54 compute-0 systemd[1]: 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53-5175280d82ea74cb.service: Failed with result 'exit-code'.
Feb 02 15:15:54 compute-0 systemd[1]: Reloading.
Feb 02 15:15:54 compute-0 systemd-rc-local-generator[145079]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:15:54 compute-0 systemd-sysv-generator[145083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:15:54 compute-0 systemd[145027]: Queued start job for default target Main User Target.
Feb 02 15:15:54 compute-0 systemd[145027]: Created slice User Application Slice.
Feb 02 15:15:54 compute-0 systemd[145027]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 02 15:15:54 compute-0 systemd[145027]: Started Daily Cleanup of User's Temporary Directories.
Feb 02 15:15:54 compute-0 systemd[145027]: Reached target Paths.
Feb 02 15:15:54 compute-0 systemd[145027]: Reached target Timers.
Feb 02 15:15:54 compute-0 systemd[145027]: Starting D-Bus User Message Bus Socket...
Feb 02 15:15:54 compute-0 systemd[145027]: Starting Create User's Volatile Files and Directories...
Feb 02 15:15:54 compute-0 systemd[145027]: Finished Create User's Volatile Files and Directories.
Feb 02 15:15:54 compute-0 systemd[145027]: Listening on D-Bus User Message Bus Socket.
Feb 02 15:15:54 compute-0 systemd[145027]: Reached target Sockets.
Feb 02 15:15:54 compute-0 systemd[145027]: Reached target Basic System.
Feb 02 15:15:54 compute-0 systemd[145027]: Reached target Main User Target.
Feb 02 15:15:54 compute-0 systemd[145027]: Startup finished in 167ms.
Feb 02 15:15:54 compute-0 systemd[1]: Started User Manager for UID 0.
Feb 02 15:15:54 compute-0 systemd[1]: Started ovn_controller container.
Feb 02 15:15:54 compute-0 systemd[1]: Started Session c1 of User root.
Feb 02 15:15:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:54 compute-0 sudo[144937]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:55 compute-0 ovn_controller[144995]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 15:15:55 compute-0 ovn_controller[144995]: INFO:__main__:Validating config file
Feb 02 15:15:55 compute-0 ovn_controller[144995]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 15:15:55 compute-0 ovn_controller[144995]: INFO:__main__:Writing out command to execute
Feb 02 15:15:55 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: ++ cat /run_command
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + ARGS=
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + sudo kolla_copy_cacerts
Feb 02 15:15:55 compute-0 ceph-mon[75334]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:55 compute-0 systemd[1]: Started Session c2 of User root.
Feb 02 15:15:55 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + [[ ! -n '' ]]
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + . kolla_extend_start
Feb 02 15:15:55 compute-0 ovn_controller[144995]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + umask 0022
Feb 02 15:15:55 compute-0 ovn_controller[144995]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1402] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1411] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <warn>  [1770045355.1413] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1422] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1427] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1430] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 15:15:55 compute-0 kernel: br-int: entered promiscuous mode
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 15:15:55 compute-0 ovn_controller[144995]: 2026-02-02T15:15:55Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1638] manager: (ovn-cbd495-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 02 15:15:55 compute-0 systemd-udevd[145134]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:15:55 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1782] device (genev_sys_6081): carrier: link connected
Feb 02 15:15:55 compute-0 NetworkManager[49171]: <info>  [1770045355.1785] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb 02 15:15:55 compute-0 systemd-udevd[145155]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:15:55 compute-0 python3.9[145259]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 02 15:15:56 compute-0 sudo[145357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:15:56 compute-0 sudo[145357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:56 compute-0 sudo[145357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:56 compute-0 sudo[145384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:15:56 compute-0 sudo[145384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:56 compute-0 sudo[145466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwttycgczhbyzmxxsmhjzffymzuzwvrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045356.2401705-619-181171781682392/AnsiballZ_stat.py'
Feb 02 15:15:56 compute-0 sudo[145466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:15:56 compute-0 python3.9[145473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:15:56 compute-0 sudo[145466]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:56 compute-0 sudo[145384]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:15:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:15:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:15:57 compute-0 ceph-mon[75334]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:15:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:15:57 compute-0 sudo[145525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:15:57 compute-0 sudo[145525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:57 compute-0 sudo[145525]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:57 compute-0 sudo[145571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:15:57 compute-0 sudo[145571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:57 compute-0 sudo[145665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znwsrmnyxtydmcelgdqfepynwvukrknt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045356.2401705-619-181171781682392/AnsiballZ_copy.py'
Feb 02 15:15:57 compute-0 sudo[145665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.42511488 +0000 UTC m=+0.062445747 container create b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:15:57 compute-0 python3.9[145667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045356.2401705-619-181171781682392/.source.yaml _original_basename=.l852ktil follow=False checksum=c5d744e3ce746c5b2017b252b92ed20787602421 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:15:57 compute-0 systemd[1]: Started libpod-conmon-b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19.scope.
Feb 02 15:15:57 compute-0 sudo[145665]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.398143623 +0000 UTC m=+0.035474540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:15:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.532079698 +0000 UTC m=+0.169410615 container init b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.543439836 +0000 UTC m=+0.180770673 container start b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.547775569 +0000 UTC m=+0.185106406 container attach b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:15:57 compute-0 systemd[1]: libpod-b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19.scope: Deactivated successfully.
Feb 02 15:15:57 compute-0 naughty_hopper[145696]: 167 167
Feb 02 15:15:57 compute-0 conmon[145696]: conmon b2f928e91af081ccf752 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19.scope/container/memory.events
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.551858535 +0000 UTC m=+0.189189372 container died b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f6490d6106c8b04b00d7887cc61bd4889b1f110b6bf0cd6bca9b8ab58d4aea6-merged.mount: Deactivated successfully.
Feb 02 15:15:57 compute-0 podman[145680]: 2026-02-02 15:15:57.596401818 +0000 UTC m=+0.233732655 container remove b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:15:57 compute-0 systemd[1]: libpod-conmon-b2f928e91af081ccf7521661c87ac1207e8a9eb44625cf14a26a1e43601bcf19.scope: Deactivated successfully.
Feb 02 15:15:57 compute-0 podman[145794]: 2026-02-02 15:15:57.747047449 +0000 UTC m=+0.049007069 container create 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:15:57 compute-0 systemd[1]: Started libpod-conmon-6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb.scope.
Feb 02 15:15:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:57 compute-0 podman[145794]: 2026-02-02 15:15:57.726005062 +0000 UTC m=+0.027964712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:15:57 compute-0 podman[145794]: 2026-02-02 15:15:57.845069176 +0000 UTC m=+0.147028866 container init 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:15:57 compute-0 podman[145794]: 2026-02-02 15:15:57.855310107 +0000 UTC m=+0.157269757 container start 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:15:57 compute-0 podman[145794]: 2026-02-02 15:15:57.859254431 +0000 UTC m=+0.161214131 container attach 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:15:57 compute-0 sudo[145889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsvhvnxcjsqgwbesjkzsnxivqzsokobk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045357.6381097-634-82358306675095/AnsiballZ_command.py'
Feb 02 15:15:57 compute-0 sudo[145889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:58 compute-0 python3.9[145891]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:58 compute-0 ovs-vsctl[145896]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 02 15:15:58 compute-0 sudo[145889]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:58 compute-0 pensive_carson[145834]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:15:58 compute-0 pensive_carson[145834]: --> All data devices are unavailable
Feb 02 15:15:58 compute-0 systemd[1]: libpod-6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb.scope: Deactivated successfully.
Feb 02 15:15:58 compute-0 podman[145794]: 2026-02-02 15:15:58.327534029 +0000 UTC m=+0.629493679 container died 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b9416a4a12fe204f36eafb8a3e4491201877124af6323f7e6f1ca9d0c44450-merged.mount: Deactivated successfully.
Feb 02 15:15:58 compute-0 sudo[146072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oorndzlkvaydrjkglizhapmjwyneylnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045358.2877488-642-28138349002883/AnsiballZ_command.py'
Feb 02 15:15:58 compute-0 sudo[146072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:58 compute-0 podman[145794]: 2026-02-02 15:15:58.813656179 +0000 UTC m=+1.115615849 container remove 6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_carson, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:15:58 compute-0 systemd[1]: libpod-conmon-6186ff3b3897f2e3f6690f9abdc5b19e39e0b5590af579837f4a06bfd79889fb.scope: Deactivated successfully.
Feb 02 15:15:58 compute-0 python3.9[146075]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:58 compute-0 sudo[145571]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:58 compute-0 ovs-vsctl[146077]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 02 15:15:58 compute-0 sudo[146072]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:58 compute-0 sudo[146079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:15:58 compute-0 sudo[146079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:58 compute-0 sudo[146079]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:58 compute-0 sudo[146117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:15:59 compute-0 sudo[146117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:15:59 compute-0 ceph-mon[75334]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.373726817 +0000 UTC m=+0.087790236 container create e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:15:59 compute-0 systemd[1]: Started libpod-conmon-e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e.scope.
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.335509363 +0000 UTC m=+0.049572802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:15:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.442226156 +0000 UTC m=+0.156289605 container init e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.447250804 +0000 UTC m=+0.161314233 container start e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.451001063 +0000 UTC m=+0.165064522 container attach e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:15:59 compute-0 crazy_yonath[146269]: 167 167
Feb 02 15:15:59 compute-0 systemd[1]: libpod-e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e.scope: Deactivated successfully.
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.453090202 +0000 UTC m=+0.167153611 container died e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-876d05ec495ee927618c759e0d6e5ab4661e88d78168858cba644e67a6d9b49f-merged.mount: Deactivated successfully.
Feb 02 15:15:59 compute-0 podman[146194]: 2026-02-02 15:15:59.492144085 +0000 UTC m=+0.206207514 container remove e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:15:59 compute-0 sudo[146318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-happpsturwmcvhiqkhvzfdbdaatdhfxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045359.2468183-656-56662644231783/AnsiballZ_command.py'
Feb 02 15:15:59 compute-0 sudo[146318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:15:59 compute-0 systemd[1]: libpod-conmon-e1e4047662ea0e447d4f870d1a5627edf94c6dcb91dca7d7f2c88bb74337b43e.scope: Deactivated successfully.
Feb 02 15:15:59 compute-0 podman[146332]: 2026-02-02 15:15:59.637590693 +0000 UTC m=+0.052520892 container create 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:15:59 compute-0 python3.9[146324]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:15:59 compute-0 systemd[1]: Started libpod-conmon-91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad.scope.
Feb 02 15:15:59 compute-0 ovs-vsctl[146349]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 02 15:15:59 compute-0 sudo[146318]: pam_unix(sudo:session): session closed for user root
Feb 02 15:15:59 compute-0 podman[146332]: 2026-02-02 15:15:59.61204679 +0000 UTC m=+0.026977069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:15:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff922612bd426b317496c502d6156b2143efc0da66f9c173c049ace9f9fa8b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff922612bd426b317496c502d6156b2143efc0da66f9c173c049ace9f9fa8b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff922612bd426b317496c502d6156b2143efc0da66f9c173c049ace9f9fa8b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff922612bd426b317496c502d6156b2143efc0da66f9c173c049ace9f9fa8b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:15:59 compute-0 podman[146332]: 2026-02-02 15:15:59.729633669 +0000 UTC m=+0.144563908 container init 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:15:59 compute-0 podman[146332]: 2026-02-02 15:15:59.740052845 +0000 UTC m=+0.154983034 container start 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:15:59 compute-0 podman[146332]: 2026-02-02 15:15:59.744341876 +0000 UTC m=+0.159272095 container attach 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:16:00 compute-0 sad_goldberg[146350]: {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     "0": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "devices": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "/dev/loop3"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             ],
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_name": "ceph_lv0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_size": "21470642176",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "name": "ceph_lv0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "tags": {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.crush_device_class": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.encrypted": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_id": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.vdo": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.with_tpm": "0"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             },
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "vg_name": "ceph_vg0"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         }
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     ],
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     "1": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "devices": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "/dev/loop4"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             ],
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_name": "ceph_lv1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_size": "21470642176",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "name": "ceph_lv1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "tags": {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.crush_device_class": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.encrypted": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_id": "1",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.vdo": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.with_tpm": "0"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             },
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "vg_name": "ceph_vg1"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         }
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     ],
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     "2": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "devices": [
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "/dev/loop5"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             ],
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_name": "ceph_lv2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_size": "21470642176",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "name": "ceph_lv2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "tags": {
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.crush_device_class": "",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.encrypted": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osd_id": "2",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.vdo": "0",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:                 "ceph.with_tpm": "0"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             },
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "type": "block",
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:             "vg_name": "ceph_vg2"
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:         }
Feb 02 15:16:00 compute-0 sad_goldberg[146350]:     ]
Feb 02 15:16:00 compute-0 sad_goldberg[146350]: }
Feb 02 15:16:00 compute-0 systemd[1]: libpod-91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad.scope: Deactivated successfully.
Feb 02 15:16:00 compute-0 podman[146332]: 2026-02-02 15:16:00.045385871 +0000 UTC m=+0.460316090 container died 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff922612bd426b317496c502d6156b2143efc0da66f9c173c049ace9f9fa8b1-merged.mount: Deactivated successfully.
Feb 02 15:16:00 compute-0 podman[146332]: 2026-02-02 15:16:00.092760701 +0000 UTC m=+0.507690930 container remove 91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:16:00 compute-0 systemd[1]: libpod-conmon-91bb2b32131c6225dbe8df3b12797e3568373d549e1b21321f4d60223fd69fad.scope: Deactivated successfully.
Feb 02 15:16:00 compute-0 sshd-session[134011]: Connection closed by 192.168.122.30 port 60704
Feb 02 15:16:00 compute-0 sshd-session[134001]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:16:00 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Feb 02 15:16:00 compute-0 systemd[1]: session-46.scope: Consumed 53.751s CPU time.
Feb 02 15:16:00 compute-0 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Feb 02 15:16:00 compute-0 systemd-logind[786]: Removed session 46.
Feb 02 15:16:00 compute-0 sudo[146117]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:00 compute-0 sudo[146395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:16:00 compute-0 sudo[146395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:16:00 compute-0 sudo[146395]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:00 compute-0 sudo[146420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:16:00 compute-0 sudo[146420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.538163968 +0000 UTC m=+0.054695573 container create 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:16:00 compute-0 systemd[1]: Started libpod-conmon-75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779.scope.
Feb 02 15:16:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.511361475 +0000 UTC m=+0.027893100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.623172448 +0000 UTC m=+0.139704133 container init 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.630827149 +0000 UTC m=+0.147358794 container start 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.635160742 +0000 UTC m=+0.151692447 container attach 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:16:00 compute-0 epic_hypatia[146476]: 167 167
Feb 02 15:16:00 compute-0 systemd[1]: libpod-75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779.scope: Deactivated successfully.
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.639172126 +0000 UTC m=+0.155703801 container died 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-760a9f3df80d0d690f6c64d708cf610aedc08b821add9ea54a4ae2f0735e120a-merged.mount: Deactivated successfully.
Feb 02 15:16:00 compute-0 podman[146459]: 2026-02-02 15:16:00.685677336 +0000 UTC m=+0.202208981 container remove 75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:16:00 compute-0 systemd[1]: libpod-conmon-75a46accf5ea27313385ed01c514c239646b8565d01a544b503eb437ad139779.scope: Deactivated successfully.
Feb 02 15:16:00 compute-0 podman[146501]: 2026-02-02 15:16:00.851867903 +0000 UTC m=+0.053481585 container create 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:16:00 compute-0 systemd[1]: Started libpod-conmon-094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e.scope.
Feb 02 15:16:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c91fb76e3a3cb8adafde6c70caa46ae4c963f4cef3fa25b5f39834b1088a95d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c91fb76e3a3cb8adafde6c70caa46ae4c963f4cef3fa25b5f39834b1088a95d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c91fb76e3a3cb8adafde6c70caa46ae4c963f4cef3fa25b5f39834b1088a95d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c91fb76e3a3cb8adafde6c70caa46ae4c963f4cef3fa25b5f39834b1088a95d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:00 compute-0 podman[146501]: 2026-02-02 15:16:00.831996314 +0000 UTC m=+0.033610036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:16:00 compute-0 podman[146501]: 2026-02-02 15:16:00.93124867 +0000 UTC m=+0.132862432 container init 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:16:00 compute-0 podman[146501]: 2026-02-02 15:16:00.938568572 +0000 UTC m=+0.140182254 container start 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:16:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:00 compute-0 podman[146501]: 2026-02-02 15:16:00.942325522 +0000 UTC m=+0.143939204 container attach 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:16:01 compute-0 ceph-mon[75334]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:01 compute-0 lvm[146594]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:16:01 compute-0 lvm[146594]: VG ceph_vg0 finished
Feb 02 15:16:01 compute-0 lvm[146597]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:16:01 compute-0 lvm[146597]: VG ceph_vg1 finished
Feb 02 15:16:01 compute-0 lvm[146599]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:16:01 compute-0 lvm[146599]: VG ceph_vg2 finished
Feb 02 15:16:01 compute-0 charming_dijkstra[146518]: {}
Feb 02 15:16:01 compute-0 systemd[1]: libpod-094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e.scope: Deactivated successfully.
Feb 02 15:16:01 compute-0 podman[146501]: 2026-02-02 15:16:01.643327831 +0000 UTC m=+0.844941523 container died 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c91fb76e3a3cb8adafde6c70caa46ae4c963f4cef3fa25b5f39834b1088a95d-merged.mount: Deactivated successfully.
Feb 02 15:16:01 compute-0 podman[146501]: 2026-02-02 15:16:01.688539119 +0000 UTC m=+0.890152841 container remove 094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:16:01 compute-0 systemd[1]: libpod-conmon-094bba40e888d51341c94e6ff84d7feb44607d0986d538b8725eb6e1baf9826e.scope: Deactivated successfully.
Feb 02 15:16:01 compute-0 sudo[146420]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:16:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:16:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:16:01 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:16:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:01 compute-0 sudo[146613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:16:01 compute-0 sudo[146613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:16:01 compute-0 sudo[146613]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:16:02 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:16:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:03 compute-0 ceph-mon[75334]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:05 compute-0 ceph-mon[75334]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:05 compute-0 systemd[1]: Stopping User Manager for UID 0...
Feb 02 15:16:05 compute-0 systemd[145027]: Activating special unit Exit the Session...
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped target Main User Target.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped target Basic System.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped target Paths.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped target Sockets.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped target Timers.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 02 15:16:05 compute-0 systemd[145027]: Closed D-Bus User Message Bus Socket.
Feb 02 15:16:05 compute-0 systemd[145027]: Stopped Create User's Volatile Files and Directories.
Feb 02 15:16:05 compute-0 systemd[145027]: Removed slice User Application Slice.
Feb 02 15:16:05 compute-0 systemd[145027]: Reached target Shutdown.
Feb 02 15:16:05 compute-0 systemd[145027]: Finished Exit the Session.
Feb 02 15:16:05 compute-0 systemd[145027]: Reached target Exit the Session.
Feb 02 15:16:05 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Feb 02 15:16:05 compute-0 systemd[1]: Stopped User Manager for UID 0.
Feb 02 15:16:05 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 02 15:16:05 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 02 15:16:05 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 02 15:16:05 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 02 15:16:05 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Feb 02 15:16:06 compute-0 sshd-session[146640]: Accepted publickey for zuul from 192.168.122.30 port 37546 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:16:06 compute-0 systemd-logind[786]: New session 48 of user zuul.
Feb 02 15:16:06 compute-0 systemd[1]: Started Session 48 of User zuul.
Feb 02 15:16:06 compute-0 sshd-session[146640]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:16:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:07 compute-0 ceph-mon[75334]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:07 compute-0 python3.9[146793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:16:08 compute-0 sudo[146947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahauytzesfiufojsrimaefwennyenjdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045367.6823726-29-51748193955689/AnsiballZ_file.py'
Feb 02 15:16:08 compute-0 sudo[146947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:08 compute-0 python3.9[146949]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:08 compute-0 sudo[146947]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:08 compute-0 sudo[147099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcckbpgmahrpushinduwbgvyiwrstnsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045368.4325812-29-188158873572368/AnsiballZ_file.py'
Feb 02 15:16:08 compute-0 sudo[147099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:08 compute-0 python3.9[147101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:08 compute-0 sudo[147099]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:09 compute-0 ceph-mon[75334]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:09 compute-0 sudo[147251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwozhsfjsivydigernhgpuixvnstvgju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045368.9885309-29-174597195025232/AnsiballZ_file.py'
Feb 02 15:16:09 compute-0 sudo[147251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:09 compute-0 python3.9[147253]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:09 compute-0 sudo[147251]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:09 compute-0 sudo[147403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwpfsvahvyxnnyyhazilkbbbsssdvgiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045369.5953999-29-238449975039547/AnsiballZ_file.py'
Feb 02 15:16:09 compute-0 sudo[147403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:10 compute-0 python3.9[147405]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:10 compute-0 sudo[147403]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:10 compute-0 sudo[147555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjcrslaydbsubbckvfbcdetjpzefijvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045370.2358446-29-22915712329064/AnsiballZ_file.py'
Feb 02 15:16:10 compute-0 sudo[147555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:10 compute-0 python3.9[147557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:10 compute-0 sudo[147555]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:11 compute-0 ceph-mon[75334]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:11 compute-0 python3.9[147707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:16:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:12 compute-0 sudo[147857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siriuebfzjnuscrbtnwiohodibogutrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045371.6390717-73-11426746743382/AnsiballZ_seboolean.py'
Feb 02 15:16:12 compute-0 sudo[147857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:12 compute-0 python3.9[147859]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 02 15:16:12 compute-0 sudo[147857]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:13 compute-0 ceph-mon[75334]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:13 compute-0 python3.9[148010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:14 compute-0 python3.9[148131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045373.0371819-81-8230659489476/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:15 compute-0 python3.9[148281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:15 compute-0 ceph-mon[75334]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:15 compute-0 python3.9[148402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045374.5705829-96-195391924014256/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:16 compute-0 sudo[148552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trukyhqbwfvrdvhhyfrdhmxmovhqjwur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045375.8314855-113-65325340643415/AnsiballZ_setup.py'
Feb 02 15:16:16 compute-0 sudo[148552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:16 compute-0 python3.9[148554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:16:16 compute-0 sudo[148552]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:17 compute-0 ceph-mon[75334]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:17 compute-0 sudo[148636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukwwrqppzqsgzmgwgyebkdccycrjmrky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045375.8314855-113-65325340643415/AnsiballZ_dnf.py'
Feb 02 15:16:17 compute-0 sudo[148636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:17 compute-0 python3.9[148638]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:16:18 compute-0 sudo[148636]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:19 compute-0 ceph-mon[75334]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:19 compute-0 sudo[148789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apuncoopmlanigtvzcivkambqjnnoskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045378.7140648-125-239267438449266/AnsiballZ_systemd.py'
Feb 02 15:16:19 compute-0 sudo[148789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:19 compute-0 python3.9[148791]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:16:19 compute-0 sudo[148789]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:20 compute-0 python3.9[148944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:20 compute-0 python3.9[149065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045379.861964-133-199923866987367/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:21 compute-0 ceph-mon[75334]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:21 compute-0 python3.9[149215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:21 compute-0 python3.9[149336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045380.9900978-133-129175119056098/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:23 compute-0 ceph-mon[75334]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:23 compute-0 python3.9[149486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:23 compute-0 python3.9[149607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045382.6855326-177-253264268610515/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:24 compute-0 python3.9[149757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:24 compute-0 python3.9[149878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045383.7894347-177-102044342444610/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:24 compute-0 ovn_controller[144995]: 2026-02-02T15:16:24Z|00025|memory|INFO|17280 kB peak resident set size after 29.8 seconds
Feb 02 15:16:24 compute-0 ovn_controller[144995]: 2026-02-02T15:16:24Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 02 15:16:24 compute-0 podman[149879]: 2026-02-02 15:16:24.93148086 +0000 UTC m=+0.112830977 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:16:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:25 compute-0 ceph-mon[75334]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:25 compute-0 python3.9[150054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:16:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:16:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2075 writes, 9162 keys, 2075 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2075 writes, 2075 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2075 writes, 9162 keys, 2075 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s
                                           Interval WAL: 2075 writes, 2075 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    207.6      0.04              0.01         3    0.014       0      0       0.0       0.0
                                             L6      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    176.8    154.6      0.09              0.05         2    0.045    7176    730       0.0       0.0
                                            Sum      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    121.0    171.4      0.13              0.07         5    0.027    7176    730       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    124.1    175.3      0.13              0.07         4    0.032    7176    730       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    176.8    154.6      0.09              0.05         2    0.045    7176    730       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    223.8      0.04              0.01         2    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 308.00 MB usage: 698.97 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(37,611.69 KB,0.193945%) FilterBlock(6,28.36 KB,0.00899179%) IndexBlock(6,58.92 KB,0.0186821%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:16:25 compute-0 sudo[150206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzsyfphfbprrxyugscsivptctphlbrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045385.6051137-215-110931307607206/AnsiballZ_file.py'
Feb 02 15:16:25 compute-0 sudo[150206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:26 compute-0 python3.9[150208]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:26 compute-0 sudo[150206]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:26 compute-0 sudo[150358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzfnzmbcumcnttsuvvreayprcqsasops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045386.3378553-223-232465162934057/AnsiballZ_stat.py'
Feb 02 15:16:26 compute-0 sudo[150358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:26 compute-0 python3.9[150360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:26 compute-0 sudo[150358]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:27 compute-0 ceph-mon[75334]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:27 compute-0 sudo[150436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxjltawsppllesgucquadecwhsihwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045386.3378553-223-232465162934057/AnsiballZ_file.py'
Feb 02 15:16:27 compute-0 sudo[150436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:27 compute-0 python3.9[150438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:27 compute-0 sudo[150436]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:27 compute-0 sudo[150588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlfcezsjuqmkhrpoufbzhtaqlgtzhuhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045387.406678-223-239953568543624/AnsiballZ_stat.py'
Feb 02 15:16:27 compute-0 sudo[150588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:27 compute-0 python3.9[150590]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:27 compute-0 sudo[150588]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:28 compute-0 sudo[150666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsqjepdkkidfdzhehrqxkyiqfpvmwjtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045387.406678-223-239953568543624/AnsiballZ_file.py'
Feb 02 15:16:28 compute-0 sudo[150666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:28 compute-0 python3.9[150668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:28 compute-0 sudo[150666]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:28 compute-0 sudo[150818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpvphremjaqzuqofsiygvwwegjddrwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045388.4652767-246-245484990780014/AnsiballZ_file.py'
Feb 02 15:16:28 compute-0 sudo[150818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:29 compute-0 python3.9[150820]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:29 compute-0 sudo[150818]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:29 compute-0 ceph-mon[75334]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:29 compute-0 sudo[150970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njlfydkarykezadncsynwhogrsykqsnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045389.2336433-254-279484928046883/AnsiballZ_stat.py'
Feb 02 15:16:29 compute-0 sudo[150970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:29 compute-0 python3.9[150972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:29 compute-0 sudo[150970]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:29 compute-0 sudo[151048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vomnuljawvxxftwwhngpbofaysrjlvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045389.2336433-254-279484928046883/AnsiballZ_file.py'
Feb 02 15:16:29 compute-0 sudo[151048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:30 compute-0 python3.9[151050]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:30 compute-0 sudo[151048]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:30 compute-0 sudo[151200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phejphbmhcodesbvuvbtmyetlgnfkofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045390.3540764-266-164830327982100/AnsiballZ_stat.py'
Feb 02 15:16:30 compute-0 sudo[151200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:30 compute-0 python3.9[151202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:30 compute-0 sudo[151200]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:31 compute-0 ceph-mon[75334]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:31 compute-0 sudo[151278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacdnpgpkyxtondcqayrejnlbopmllgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045390.3540764-266-164830327982100/AnsiballZ_file.py'
Feb 02 15:16:31 compute-0 sudo[151278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:31 compute-0 python3.9[151280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:31 compute-0 sudo[151278]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:31 compute-0 sudo[151430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfjalckgmfrnodepnaqhdzdbgrnrerx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045391.4301898-278-210882013512114/AnsiballZ_systemd.py'
Feb 02 15:16:31 compute-0 sudo[151430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:32 compute-0 python3.9[151432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:16:32 compute-0 systemd[1]: Reloading.
Feb 02 15:16:32 compute-0 systemd-rc-local-generator[151453]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:16:32 compute-0 systemd-sysv-generator[151456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:16:32 compute-0 sudo[151430]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:32 compute-0 sudo[151619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpanmeplkozcvrqmdaeasmtunirmkuud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045392.5478287-286-113329220422873/AnsiballZ_stat.py'
Feb 02 15:16:32 compute-0 sudo[151619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:32 compute-0 python3.9[151621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:33 compute-0 ceph-mon[75334]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:33 compute-0 sudo[151619]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:33 compute-0 sudo[151697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmrryhifvmnigyhygbmrhpcdzdnhmgcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045392.5478287-286-113329220422873/AnsiballZ_file.py'
Feb 02 15:16:33 compute-0 sudo[151697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:33 compute-0 python3.9[151699]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:33 compute-0 sudo[151697]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:33 compute-0 sudo[151849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tttzvwfeklbgoawndytxbmfralbxohwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045393.601082-298-178897274270081/AnsiballZ_stat.py'
Feb 02 15:16:33 compute-0 sudo[151849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:34 compute-0 python3.9[151851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:34 compute-0 sudo[151849]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:34 compute-0 sudo[151927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzmbspkkbotioxouczsdkeqxaroirczh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045393.601082-298-178897274270081/AnsiballZ_file.py'
Feb 02 15:16:34 compute-0 sudo[151927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:34 compute-0 python3.9[151929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:34 compute-0 sudo[151927]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:35 compute-0 ceph-mon[75334]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:35 compute-0 sudo[152079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcedaxhqjkrxlgpgvtoxocdfxblgvvsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045394.8010063-310-7544963504369/AnsiballZ_systemd.py'
Feb 02 15:16:35 compute-0 sudo[152079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:35 compute-0 python3.9[152081]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:16:35 compute-0 systemd[1]: Reloading.
Feb 02 15:16:35 compute-0 systemd-rc-local-generator[152109]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:16:35 compute-0 systemd-sysv-generator[152112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:16:35 compute-0 systemd[1]: Starting Create netns directory...
Feb 02 15:16:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 02 15:16:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 02 15:16:35 compute-0 systemd[1]: Finished Create netns directory.
Feb 02 15:16:35 compute-0 sudo[152079]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:36 compute-0 sudo[152272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitwwegvabzqdzmppxpjmwvuvqgnqbyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045396.0877533-320-134275733509197/AnsiballZ_file.py'
Feb 02 15:16:36 compute-0 sudo[152272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:36 compute-0 python3.9[152274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:36 compute-0 sudo[152272]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:37 compute-0 ceph-mon[75334]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:37 compute-0 sudo[152424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrxaskqfahjkbxfmjxqldmdowqnpanft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045396.794845-328-188188827402951/AnsiballZ_stat.py'
Feb 02 15:16:37 compute-0 sudo[152424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:37 compute-0 python3.9[152426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:37 compute-0 sudo[152424]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:37 compute-0 sudo[152547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uataligyylvzmzgifbadglrvczamqasq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045396.794845-328-188188827402951/AnsiballZ_copy.py'
Feb 02 15:16:37 compute-0 sudo[152547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:37 compute-0 python3.9[152549]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045396.794845-328-188188827402951/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:37 compute-0 sudo[152547]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:38 compute-0 sudo[152699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxcgbuhadzmzeatjzfdhugclcdybvqzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045398.1660247-345-5959833767431/AnsiballZ_file.py'
Feb 02 15:16:38 compute-0 sudo[152699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:38 compute-0 python3.9[152701]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:38 compute-0 sudo[152699]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:39 compute-0 ceph-mon[75334]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:39 compute-0 sudo[152851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moohdojatoozcmpyhmrjtztunkorrrbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045398.8270285-353-170602420711045/AnsiballZ_file.py'
Feb 02 15:16:39 compute-0 sudo[152851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:39 compute-0 python3.9[152853]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:16:39 compute-0 sudo[152851]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:39 compute-0 sudo[153003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdnszhtsonlamvsiiahigjkgmszncdgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045399.472613-361-106366906345639/AnsiballZ_stat.py'
Feb 02 15:16:39 compute-0 sudo[153003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:39 compute-0 python3.9[153005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:39 compute-0 sudo[153003]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:40 compute-0 sudo[153126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhbcgwquvnupdnnnqdtehglrmieaycxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045399.472613-361-106366906345639/AnsiballZ_copy.py'
Feb 02 15:16:40 compute-0 sudo[153126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:40 compute-0 python3.9[153128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045399.472613-361-106366906345639/.source.json _original_basename=.ryjj2ny7 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:40 compute-0 sudo[153126]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:41 compute-0 ceph-mon[75334]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:41 compute-0 python3.9[153278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:16:42
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes']
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:16:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.025127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403025172, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 790, "num_deletes": 251, "total_data_size": 1077060, "memory_usage": 1094848, "flush_reason": "Manual Compaction"}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403033863, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1067627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8893, "largest_seqno": 9682, "table_properties": {"data_size": 1063626, "index_size": 1781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8509, "raw_average_key_size": 18, "raw_value_size": 1055608, "raw_average_value_size": 2304, "num_data_blocks": 83, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045332, "oldest_key_time": 1770045332, "file_creation_time": 1770045403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8788 microseconds, and 4014 cpu microseconds.
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.033916) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1067627 bytes OK
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.033938) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.036551) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.036576) EVENT_LOG_v1 {"time_micros": 1770045403036569, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.036600) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1073103, prev total WAL file size 1074258, number of live WAL files 2.
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.037283) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1042KB)], [23(6841KB)]
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403037327, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8073402, "oldest_snapshot_seqno": -1}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3290 keys, 6185981 bytes, temperature: kUnknown
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403069909, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6185981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6162164, "index_size": 14492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79719, "raw_average_key_size": 24, "raw_value_size": 6100821, "raw_average_value_size": 1854, "num_data_blocks": 632, "num_entries": 3290, "num_filter_entries": 3290, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.070166) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6185981 bytes
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.071844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.2 rd, 189.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.7 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(13.4) write-amplify(5.8) OK, records in: 3804, records dropped: 514 output_compression: NoCompression
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.071872) EVENT_LOG_v1 {"time_micros": 1770045403071858, "job": 8, "event": "compaction_finished", "compaction_time_micros": 32658, "compaction_time_cpu_micros": 19671, "output_level": 6, "num_output_files": 1, "total_output_size": 6185981, "num_input_records": 3804, "num_output_records": 3290, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403072128, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045403073001, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.037190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.073156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.073166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.073169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.073173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:16:43.073176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:16:43 compute-0 sudo[153699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwqlatdmhiwcxupebcgdtqhyxelwlqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045402.7166603-401-4297941529499/AnsiballZ_container_config_data.py'
Feb 02 15:16:43 compute-0 sudo[153699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:43 compute-0 python3.9[153701]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 02 15:16:43 compute-0 sudo[153699]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:44 compute-0 sudo[153851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhkwmqlheuljwcflnewcpdnunjnwrxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045403.6679225-412-227530712330426/AnsiballZ_container_config_hash.py'
Feb 02 15:16:44 compute-0 sudo[153851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:44 compute-0 python3.9[153853]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 15:16:44 compute-0 sudo[153851]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:16:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:45 compute-0 ceph-mon[75334]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:45 compute-0 sudo[154003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilpzernolwknznsybxqppdyhojohvlra ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045404.5671544-422-120687646579217/AnsiballZ_edpm_container_manage.py'
Feb 02 15:16:45 compute-0 sudo[154003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:45 compute-0 python3[154005]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 15:16:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:47 compute-0 ceph-mon[75334]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:49 compute-0 ceph-mon[75334]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:51 compute-0 ceph-mon[75334]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:52 compute-0 podman[154016]: 2026-02-02 15:16:52.546699065 +0000 UTC m=+7.208116437 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:16:52 compute-0 podman[154137]: 2026-02-02 15:16:52.719275681 +0000 UTC m=+0.062218886 container create 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:16:52 compute-0 podman[154137]: 2026-02-02 15:16:52.684959684 +0000 UTC m=+0.027902949 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:16:52 compute-0 python3[154005]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:16:52 compute-0 sudo[154003]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:53 compute-0 ceph-mon[75334]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:53 compute-0 sudo[154324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslllveckmidflejucxkyujvcoyrvvvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045413.02471-430-87394453215060/AnsiballZ_stat.py'
Feb 02 15:16:53 compute-0 sudo[154324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:53 compute-0 python3.9[154326]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:16:53 compute-0 sudo[154324]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 sudo[154478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnvrvckptxpozrafzlmdewbytksuyrpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045413.778319-439-48047067402216/AnsiballZ_file.py'
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:16:54 compute-0 sudo[154478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:54 compute-0 python3.9[154480]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:54 compute-0 sudo[154478]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:54 compute-0 sudo[154554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlbivdnkwxcgjahkgafbmkrbkfypmmkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045413.778319-439-48047067402216/AnsiballZ_stat.py'
Feb 02 15:16:54 compute-0 sudo[154554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:54 compute-0 python3.9[154556]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:16:54 compute-0 sudo[154554]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:55 compute-0 ceph-mon[75334]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:55 compute-0 sudo[154720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcwcodoylgxnlwsmjwpikwwecarvvwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045414.6882796-439-155548230929826/AnsiballZ_copy.py'
Feb 02 15:16:55 compute-0 sudo[154720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:55 compute-0 podman[154679]: 2026-02-02 15:16:55.258154337 +0000 UTC m=+0.156845882 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:16:55 compute-0 python3.9[154724]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770045414.6882796-439-155548230929826/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:55 compute-0 sudo[154720]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:55 compute-0 sudo[154807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcnqmrnwttavatzkzlupmlseesraadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045414.6882796-439-155548230929826/AnsiballZ_systemd.py'
Feb 02 15:16:55 compute-0 sudo[154807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:55 compute-0 python3.9[154809]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:16:55 compute-0 systemd[1]: Reloading.
Feb 02 15:16:56 compute-0 systemd-rc-local-generator[154833]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:16:56 compute-0 systemd-sysv-generator[154840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:16:56 compute-0 sudo[154807]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:56 compute-0 sudo[154918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aerkkgihwhtlqkrnaczxjuykpqwxfttt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045414.6882796-439-155548230929826/AnsiballZ_systemd.py'
Feb 02 15:16:56 compute-0 sudo[154918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:56 compute-0 python3.9[154920]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:16:56 compute-0 systemd[1]: Reloading.
Feb 02 15:16:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:16:56 compute-0 systemd-sysv-generator[154953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:16:56 compute-0 systemd-rc-local-generator[154950]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:16:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:57 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Feb 02 15:16:57 compute-0 ceph-mon[75334]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab4757767225640097e75dd9fdc4f39e0e0382d580f791aa7f167b582ee0f9/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab4757767225640097e75dd9fdc4f39e0e0382d580f791aa7f167b582ee0f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:16:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3.
Feb 02 15:16:57 compute-0 podman[154961]: 2026-02-02 15:16:57.188916416 +0000 UTC m=+0.146391658 container init 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + sudo -E kolla_set_configs
Feb 02 15:16:57 compute-0 podman[154961]: 2026-02-02 15:16:57.220551681 +0000 UTC m=+0.178026873 container start 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:16:57 compute-0 edpm-start-podman-container[154961]: ovn_metadata_agent
Feb 02 15:16:57 compute-0 edpm-start-podman-container[154960]: Creating additional drop-in dependency for "ovn_metadata_agent" (79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3)
Feb 02 15:16:57 compute-0 podman[154984]: 2026-02-02 15:16:57.307148181 +0000 UTC m=+0.075474293 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:16:57 compute-0 systemd[1]: Reloading.
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Validating config file
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Copying service configuration files
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Writing out command to execute
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: ++ cat /run_command
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + CMD=neutron-ovn-metadata-agent
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + ARGS=
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + sudo kolla_copy_cacerts
Feb 02 15:16:57 compute-0 systemd-rc-local-generator[155047]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + [[ ! -n '' ]]
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + . kolla_extend_start
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: Running command: 'neutron-ovn-metadata-agent'
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + umask 0022
Feb 02 15:16:57 compute-0 ovn_metadata_agent[154977]: + exec neutron-ovn-metadata-agent
Feb 02 15:16:57 compute-0 systemd-sysv-generator[155053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:16:57 compute-0 systemd[1]: Started ovn_metadata_agent container.
Feb 02 15:16:57 compute-0 sudo[154918]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:58 compute-0 python3.9[155216]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 02 15:16:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:59 compute-0 ceph-mon[75334]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:16:59 compute-0 sudo[155366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqaeggorunalsqtkwithgxryhzrpaozt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045418.8601549-484-158989565264995/AnsiballZ_stat.py'
Feb 02 15:16:59 compute-0 sudo[155366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.186 154982 INFO neutron.common.config [-] Logging enabled!
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.187 154982 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.187 154982 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.187 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.188 154982 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.189 154982 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.190 154982 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.191 154982 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.192 154982 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.192 154982 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.193 154982 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.193 154982 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.193 154982 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.193 154982 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.194 154982 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.194 154982 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.194 154982 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.194 154982 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.195 154982 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.195 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.195 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.196 154982 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.197 154982 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.198 154982 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.199 154982 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.200 154982 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.201 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.202 154982 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.203 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.204 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.205 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.206 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.207 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.208 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.209 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.210 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.211 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.212 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.213 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.214 154982 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.215 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.216 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.217 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.218 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.219 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.220 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.221 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.222 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.223 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.224 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.225 154982 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.234 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.234 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.234 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.235 154982 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.235 154982 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.246 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 673607ba-6470-4d88-9324-0f750aed69af (UUID: 673607ba-6470-4d88-9324-0f750aed69af) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.268 154982 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.268 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.268 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.268 154982 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.270 154982 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.276 154982 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.280 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '673607ba-6470-4d88-9324-0f750aed69af'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], external_ids={}, name=673607ba-6470-4d88-9324-0f750aed69af, nb_cfg_timestamp=1770045363159, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.281 154982 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7efc0aaa2c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.282 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.282 154982 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.282 154982 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.282 154982 INFO oslo_service.service [-] Starting 1 workers
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.285 154982 DEBUG oslo_service.service [-] Started child 155369 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.288 154982 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpyr_z3z2m/privsep.sock']
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.289 155369 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-505321'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 02 15:16:59 compute-0 python3.9[155368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.322 155369 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.323 155369 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.324 155369 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.329 155369 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.335 155369 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.340 155369 INFO eventlet.wsgi.server [-] (155369) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 02 15:16:59 compute-0 sudo[155366]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:59 compute-0 sudo[155496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxzlqgwlpwjkafeqqmecaqigdughhlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045418.8601549-484-158989565264995/AnsiballZ_copy.py'
Feb 02 15:16:59 compute-0 sudo[155496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:16:59 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 02 15:16:59 compute-0 python3.9[155498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045418.8601549-484-158989565264995/.source.yaml _original_basename=.girpulzq follow=False checksum=1b1e46b85c7f8e1e61b7086c133b59de368ed19e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:16:59 compute-0 sudo[155496]: pam_unix(sudo:session): session closed for user root
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.878 154982 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.880 154982 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyr_z3z2m/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.789 155499 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.796 155499 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.800 155499 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.800 155499 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155499
Feb 02 15:16:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:16:59.885 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2477ba-a3ab-411b-8e25-41e3373287b2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:17:00 compute-0 sshd-session[146643]: Connection closed by 192.168.122.30 port 37546
Feb 02 15:17:00 compute-0 sshd-session[146640]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:17:00 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Feb 02 15:17:00 compute-0 systemd[1]: session-48.scope: Consumed 50.180s CPU time.
Feb 02 15:17:00 compute-0 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Feb 02 15:17:00 compute-0 systemd-logind[786]: Removed session 48.
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.314 155499 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.314 155499 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.314 155499 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.782 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[adca2255-cf86-4c11-8573-7a50a55d500c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.784 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, column=external_ids, values=({'neutron:ovn-metadata-id': '8ad75c79-2aec-55e2-9cb2-9f7eb865ffc4'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.791 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.796 154982 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.796 154982 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.797 154982 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.798 154982 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.799 154982 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.800 154982 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.801 154982 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.802 154982 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.803 154982 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.804 154982 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.805 154982 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.806 154982 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.807 154982 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.808 154982 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.809 154982 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.810 154982 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.811 154982 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.812 154982 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.813 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.814 154982 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.815 154982 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.816 154982 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.817 154982 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.818 154982 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.819 154982 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.820 154982 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.821 154982 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.822 154982 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.823 154982 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.824 154982 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.825 154982 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.826 154982 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.827 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.828 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.829 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.830 154982 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:17:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:00.831 154982 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 15:17:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:01 compute-0 ceph-mon[75334]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:01 compute-0 sudo[155528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:17:01 compute-0 sudo[155528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:01 compute-0 sudo[155528]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:01 compute-0 sudo[155553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:17:01 compute-0 sudo[155553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:02 compute-0 sudo[155553]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:02 compute-0 sudo[155599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:17:02 compute-0 sudo[155599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:02 compute-0 sudo[155599]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:02 compute-0 sudo[155624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:17:02 compute-0 sudo[155624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:02 compute-0 sudo[155624]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:17:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:17:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:17:02 compute-0 sudo[155680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:17:02 compute-0 sudo[155680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:02 compute-0 sudo[155680]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:02 compute-0 sudo[155705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:17:02 compute-0 sudo[155705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.070455198 +0000 UTC m=+0.037229296 container create da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:17:03 compute-0 systemd[1]: Started libpod-conmon-da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6.scope.
Feb 02 15:17:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.142584442 +0000 UTC m=+0.109358590 container init da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.151583131 +0000 UTC m=+0.118357229 container start da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.05635276 +0000 UTC m=+0.023126878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.155780128 +0000 UTC m=+0.122554276 container attach da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:03 compute-0 nice_ellis[155760]: 167 167
Feb 02 15:17:03 compute-0 systemd[1]: libpod-da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6.scope: Deactivated successfully.
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.157294423 +0000 UTC m=+0.124068531 container died da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b20a3552521092bde8960f403de2134da51eb77588979e40fd2499aca52043e5-merged.mount: Deactivated successfully.
Feb 02 15:17:03 compute-0 podman[155743]: 2026-02-02 15:17:03.204314574 +0000 UTC m=+0.171088672 container remove da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:17:03 compute-0 systemd[1]: libpod-conmon-da39686d490df92fd220f36fda05570090ef6e6ab0193ba91dcf22477a3900f6.scope: Deactivated successfully.
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:17:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:17:03 compute-0 ceph-mon[75334]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.333111884 +0000 UTC m=+0.038289309 container create d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:17:03 compute-0 systemd[1]: Started libpod-conmon-d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d.scope.
Feb 02 15:17:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.317369479 +0000 UTC m=+0.022546934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.420038522 +0000 UTC m=+0.125215977 container init d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.426039192 +0000 UTC m=+0.131216627 container start d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.429037242 +0000 UTC m=+0.134214717 container attach d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:17:03 compute-0 stoic_chatterjee[155801]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:17:03 compute-0 stoic_chatterjee[155801]: --> All data devices are unavailable
Feb 02 15:17:03 compute-0 systemd[1]: libpod-d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d.scope: Deactivated successfully.
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.804093768 +0000 UTC m=+0.509271193 container died d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac67df6397ae3499948e22b15b8f1caea158e8be8ec9eb3cf211c44516e6740b-merged.mount: Deactivated successfully.
Feb 02 15:17:03 compute-0 podman[155784]: 2026-02-02 15:17:03.855179403 +0000 UTC m=+0.560356828 container remove d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chatterjee, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:17:03 compute-0 systemd[1]: libpod-conmon-d9911b19190763d5a5a1c4c65a70b08cea1c4693640ccd30c8007e0ccd59e60d.scope: Deactivated successfully.
Feb 02 15:17:03 compute-0 sudo[155705]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:03 compute-0 sudo[155832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:17:03 compute-0 sudo[155832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:03 compute-0 sudo[155832]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:04 compute-0 sudo[155857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:17:04 compute-0 sudo[155857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.250273045 +0000 UTC m=+0.036823596 container create 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:17:04 compute-0 systemd[1]: Started libpod-conmon-2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48.scope.
Feb 02 15:17:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.322444041 +0000 UTC m=+0.108994652 container init 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.232001561 +0000 UTC m=+0.018552172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.328742216 +0000 UTC m=+0.115292787 container start 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.332550895 +0000 UTC m=+0.119101476 container attach 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:17:04 compute-0 romantic_black[155912]: 167 167
Feb 02 15:17:04 compute-0 systemd[1]: libpod-2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48.scope: Deactivated successfully.
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.334664064 +0000 UTC m=+0.121214625 container died 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-800de76a83b9e00c2e084c0c4e5d167d7c6ccd5d50d92d46c7a4e12fb9067cc4-merged.mount: Deactivated successfully.
Feb 02 15:17:04 compute-0 podman[155895]: 2026-02-02 15:17:04.370212269 +0000 UTC m=+0.156762840 container remove 2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:17:04 compute-0 systemd[1]: libpod-conmon-2cc7caaf84d44eb5aa71e1dd0e3a7e58fadbb85cfa840b1ac5b2e7315a4fbd48.scope: Deactivated successfully.
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.5020917 +0000 UTC m=+0.046081530 container create e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:17:04 compute-0 systemd[1]: Started libpod-conmon-e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7.scope.
Feb 02 15:17:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.478694478 +0000 UTC m=+0.022684338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f5424c7503054746a26af8cdcaf00e2e449dd2dcb48c52f72fbbc9c9ac47d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f5424c7503054746a26af8cdcaf00e2e449dd2dcb48c52f72fbbc9c9ac47d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f5424c7503054746a26af8cdcaf00e2e449dd2dcb48c52f72fbbc9c9ac47d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f5424c7503054746a26af8cdcaf00e2e449dd2dcb48c52f72fbbc9c9ac47d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.589834428 +0000 UTC m=+0.133824248 container init e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.595761895 +0000 UTC m=+0.139751715 container start e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.602163404 +0000 UTC m=+0.146153224 container attach e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]: {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     "0": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "devices": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "/dev/loop3"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             ],
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_name": "ceph_lv0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_size": "21470642176",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "name": "ceph_lv0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "tags": {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_name": "ceph",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.crush_device_class": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.encrypted": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.objectstore": "bluestore",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_id": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.vdo": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.with_tpm": "0"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             },
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "vg_name": "ceph_vg0"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         }
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     ],
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     "1": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "devices": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "/dev/loop4"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             ],
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_name": "ceph_lv1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_size": "21470642176",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "name": "ceph_lv1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "tags": {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_name": "ceph",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.crush_device_class": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.encrypted": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.objectstore": "bluestore",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_id": "1",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.vdo": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.with_tpm": "0"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             },
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "vg_name": "ceph_vg1"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         }
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     ],
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     "2": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "devices": [
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "/dev/loop5"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             ],
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_name": "ceph_lv2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_size": "21470642176",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "name": "ceph_lv2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "tags": {
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.cluster_name": "ceph",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.crush_device_class": "",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.encrypted": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.objectstore": "bluestore",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osd_id": "2",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.vdo": "0",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:                 "ceph.with_tpm": "0"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             },
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "type": "block",
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:             "vg_name": "ceph_vg2"
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:         }
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]:     ]
Feb 02 15:17:04 compute-0 loving_matsumoto[155952]: }
Feb 02 15:17:04 compute-0 systemd[1]: libpod-e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7.scope: Deactivated successfully.
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.881210221 +0000 UTC m=+0.425200061 container died e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f5424c7503054746a26af8cdcaf00e2e449dd2dcb48c52f72fbbc9c9ac47d0-merged.mount: Deactivated successfully.
Feb 02 15:17:04 compute-0 podman[155936]: 2026-02-02 15:17:04.93845831 +0000 UTC m=+0.482448120 container remove e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:17:04 compute-0 systemd[1]: libpod-conmon-e0c1d9f70105978ff123dffe3d0fa128bee344d28508ca9b2a9e1dfd6202adf7.scope: Deactivated successfully.
Feb 02 15:17:04 compute-0 sudo[155857]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:05 compute-0 sudo[155972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:17:05 compute-0 sudo[155972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:05 compute-0 sudo[155972]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:05 compute-0 ceph-mon[75334]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:05 compute-0 sudo[155997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:17:05 compute-0 sudo[155997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.282402175 +0000 UTC m=+0.035967316 container create 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:17:05 compute-0 systemd[1]: Started libpod-conmon-1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940.scope.
Feb 02 15:17:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.35283567 +0000 UTC m=+0.106400821 container init 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.359150846 +0000 UTC m=+0.112715977 container start 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.266214719 +0000 UTC m=+0.019779890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.362512114 +0000 UTC m=+0.116077245 container attach 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:17:05 compute-0 agitated_brown[156053]: 167 167
Feb 02 15:17:05 compute-0 systemd[1]: libpod-1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940.scope: Deactivated successfully.
Feb 02 15:17:05 compute-0 conmon[156053]: conmon 1b56193cf9b40fa48928 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940.scope/container/memory.events
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.364656273 +0000 UTC m=+0.118221404 container died 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:17:05 compute-0 sshd-session[156049]: Accepted publickey for zuul from 192.168.122.30 port 38562 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:17:05 compute-0 systemd-logind[786]: New session 49 of user zuul.
Feb 02 15:17:05 compute-0 systemd[1]: Started Session 49 of User zuul.
Feb 02 15:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8341be697bcf54ec96f168b25f33374a9d360a18939fa0157e83609cb296d7d-merged.mount: Deactivated successfully.
Feb 02 15:17:05 compute-0 sshd-session[156049]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:17:05 compute-0 podman[156035]: 2026-02-02 15:17:05.419827445 +0000 UTC m=+0.173392616 container remove 1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:17:05 compute-0 systemd[1]: libpod-conmon-1b56193cf9b40fa48928bf83d85f3e09549ca31c18e22f04a6b19a02b91d6940.scope: Deactivated successfully.
Feb 02 15:17:05 compute-0 podman[156107]: 2026-02-02 15:17:05.558164346 +0000 UTC m=+0.044440613 container create e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:17:05 compute-0 systemd[1]: Started libpod-conmon-e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde.scope.
Feb 02 15:17:05 compute-0 podman[156107]: 2026-02-02 15:17:05.53249308 +0000 UTC m=+0.018769367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:17:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac31dbff201f2bace30335537f27c266a3040bdfddb465c22d8b6049d53109f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac31dbff201f2bace30335537f27c266a3040bdfddb465c22d8b6049d53109f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac31dbff201f2bace30335537f27c266a3040bdfddb465c22d8b6049d53109f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac31dbff201f2bace30335537f27c266a3040bdfddb465c22d8b6049d53109f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:17:05 compute-0 podman[156107]: 2026-02-02 15:17:05.65439833 +0000 UTC m=+0.140674617 container init e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:17:05 compute-0 podman[156107]: 2026-02-02 15:17:05.664584867 +0000 UTC m=+0.150861174 container start e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:17:05 compute-0 podman[156107]: 2026-02-02 15:17:05.693275242 +0000 UTC m=+0.179551529 container attach e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:17:06 compute-0 lvm[156324]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:17:06 compute-0 lvm[156324]: VG ceph_vg0 finished
Feb 02 15:17:06 compute-0 lvm[156327]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:17:06 compute-0 lvm[156327]: VG ceph_vg1 finished
Feb 02 15:17:06 compute-0 lvm[156328]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:17:06 compute-0 lvm[156328]: VG ceph_vg2 finished
Feb 02 15:17:06 compute-0 python3.9[156290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:17:06 compute-0 agitated_proskuriakova[156150]: {}
Feb 02 15:17:06 compute-0 systemd[1]: libpod-e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde.scope: Deactivated successfully.
Feb 02 15:17:06 compute-0 conmon[156150]: conmon e5f9ffef7ce9c33d3d59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde.scope/container/memory.events
Feb 02 15:17:06 compute-0 podman[156107]: 2026-02-02 15:17:06.365847535 +0000 UTC m=+0.852123802 container died e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ac31dbff201f2bace30335537f27c266a3040bdfddb465c22d8b6049d53109f-merged.mount: Deactivated successfully.
Feb 02 15:17:06 compute-0 podman[156107]: 2026-02-02 15:17:06.571849557 +0000 UTC m=+1.058125864 container remove e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:17:06 compute-0 systemd[1]: libpod-conmon-e5f9ffef7ce9c33d3d5981655dd55b2ffa967067f4344019c3add246915cbbde.scope: Deactivated successfully.
Feb 02 15:17:06 compute-0 sudo[155997]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:17:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:17:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:06 compute-0 sudo[156371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:17:06 compute-0 sudo[156371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:17:06 compute-0 sudo[156371]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:07 compute-0 sudo[156521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmurvjbmswgbcckvwiwxxmzmrivlmzgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045426.7209618-29-143753476185356/AnsiballZ_command.py'
Feb 02 15:17:07 compute-0 sudo[156521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:07 compute-0 python3.9[156523]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:07 compute-0 sudo[156521]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:17:07 compute-0 ceph-mon[75334]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:08 compute-0 sudo[156686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvkokyfzcepwhemrkkjsicjupwfbhbye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045427.6045344-40-78060105022610/AnsiballZ_systemd_service.py'
Feb 02 15:17:08 compute-0 sudo[156686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:08 compute-0 python3.9[156688]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:17:08 compute-0 systemd[1]: Reloading.
Feb 02 15:17:08 compute-0 systemd-rc-local-generator[156715]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:17:08 compute-0 systemd-sysv-generator[156718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:17:08 compute-0 sudo[156686]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:09 compute-0 ceph-mon[75334]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:09 compute-0 python3.9[156874]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:17:09 compute-0 network[156891]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:17:09 compute-0 network[156892]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:17:09 compute-0 network[156893]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:17:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:11 compute-0 ceph-mon[75334]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:12 compute-0 sudo[157153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opfwvdokilljaiqsiwqmhooonotmsnlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045432.1967618-59-54656649854349/AnsiballZ_systemd_service.py'
Feb 02 15:17:12 compute-0 sudo[157153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:12 compute-0 python3.9[157155]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:12 compute-0 sudo[157153]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:13 compute-0 ceph-mon[75334]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:13 compute-0 sudo[157306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurnmgdlxzbgmdrffrpxgpfvjoriqewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045432.9502313-59-253563304827185/AnsiballZ_systemd_service.py'
Feb 02 15:17:13 compute-0 sudo[157306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:13 compute-0 python3.9[157308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:13 compute-0 sudo[157306]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:13 compute-0 sudo[157459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkkminrncvptfdgbujfclhdmskftedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045433.6645167-59-141495067766425/AnsiballZ_systemd_service.py'
Feb 02 15:17:13 compute-0 sudo[157459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:14 compute-0 python3.9[157461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:14 compute-0 sudo[157459]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:14 compute-0 sudo[157612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzktymhzbulzugfrwtwabwnebjynakr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045434.408447-59-257470648104306/AnsiballZ_systemd_service.py'
Feb 02 15:17:14 compute-0 sudo[157612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:14 compute-0 python3.9[157614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:15 compute-0 sudo[157612]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:15 compute-0 ceph-mon[75334]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:15 compute-0 sudo[157765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobtvwzbtmtqlzaoyzopcilctmezskqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045435.1905942-59-57560226939810/AnsiballZ_systemd_service.py'
Feb 02 15:17:15 compute-0 sudo[157765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:15 compute-0 python3.9[157767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:15 compute-0 sudo[157765]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:16 compute-0 sudo[157918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuzvqbetxdjngfjrotmlnqkxirkbudis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045435.900167-59-258496473969462/AnsiballZ_systemd_service.py'
Feb 02 15:17:16 compute-0 sudo[157918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:16 compute-0 python3.9[157920]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:16 compute-0 sudo[157918]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:16 compute-0 sudo[158071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqcqzrwhhbsranluimpckmihzxvktsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045436.597122-59-244335047674737/AnsiballZ_systemd_service.py'
Feb 02 15:17:16 compute-0 sudo[158071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:17 compute-0 ceph-mon[75334]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:17 compute-0 python3.9[158073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:17:18 compute-0 sudo[158071]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:19 compute-0 ceph-mon[75334]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:19 compute-0 sudo[158224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqgescrpkjubpikifwnswqiwozzxswje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045438.6674263-111-128031340528066/AnsiballZ_file.py'
Feb 02 15:17:19 compute-0 sudo[158224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:19 compute-0 python3.9[158226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:19 compute-0 sudo[158224]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:19 compute-0 sudo[158376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrhbvxdsinznoqxixpfqrrvnpeaysmjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045439.4638596-111-150642284719567/AnsiballZ_file.py'
Feb 02 15:17:19 compute-0 sudo[158376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:19 compute-0 python3.9[158378]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:19 compute-0 sudo[158376]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:20 compute-0 sudo[158528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylfzcngenavjexevzekjlkucctsuqcbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045440.082877-111-29350356931278/AnsiballZ_file.py'
Feb 02 15:17:20 compute-0 sudo[158528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:20 compute-0 python3.9[158530]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:20 compute-0 sudo[158528]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:20 compute-0 sudo[158680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkzcxebrulqaxwgzaibxdgrsitokwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045440.6799781-111-40822366149247/AnsiballZ_file.py'
Feb 02 15:17:20 compute-0 sudo[158680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:21 compute-0 ceph-mon[75334]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:21 compute-0 python3.9[158682]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:21 compute-0 sudo[158680]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:21 compute-0 sudo[158832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzcrkcqqnsbtkrlbhadaxtvwhfixcto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045441.3214161-111-232161420876866/AnsiballZ_file.py'
Feb 02 15:17:21 compute-0 sudo[158832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:21 compute-0 python3.9[158834]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:21 compute-0 sudo[158832]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:22 compute-0 sudo[158984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejpklhlyrvjkfnknkedxzsxnvyxbiedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045441.9682825-111-250291616126205/AnsiballZ_file.py'
Feb 02 15:17:22 compute-0 sudo[158984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:22 compute-0 python3.9[158986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:22 compute-0 sudo[158984]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:22 compute-0 sudo[159136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piynosxtjicybpvlxbwvdrzgawxzfehx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045442.5996232-111-59353213771311/AnsiballZ_file.py'
Feb 02 15:17:22 compute-0 sudo[159136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:23 compute-0 ceph-mon[75334]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:23 compute-0 python3.9[159138]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:23 compute-0 sudo[159136]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:23 compute-0 sudo[159288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmnsyrywyzvbiiqdhvlokkwtjraivyka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045443.2767348-161-220284781144144/AnsiballZ_file.py'
Feb 02 15:17:23 compute-0 sudo[159288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:23 compute-0 python3.9[159290]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:23 compute-0 sudo[159288]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:24 compute-0 sudo[159440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcmkzpmdvlwawygawevlkvxtjxubhsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045443.9086034-161-79626984302353/AnsiballZ_file.py'
Feb 02 15:17:24 compute-0 sudo[159440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:24 compute-0 python3.9[159442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:24 compute-0 sudo[159440]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:24 compute-0 sudo[159592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvvzilhwzbdfuzqpzegpohgxcmzwubwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045444.4739046-161-111240632247541/AnsiballZ_file.py'
Feb 02 15:17:24 compute-0 sudo[159592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:24 compute-0 python3.9[159594]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:24 compute-0 sudo[159592]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:25 compute-0 ceph-mon[75334]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:25 compute-0 sudo[159744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghjhgaowozkhnncovtcipeglmkyrkks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045445.027847-161-206955565089952/AnsiballZ_file.py'
Feb 02 15:17:25 compute-0 sudo[159744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:25 compute-0 podman[159746]: 2026-02-02 15:17:25.420025536 +0000 UTC m=+0.097426572 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:17:25 compute-0 python3.9[159747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:25 compute-0 sudo[159744]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:17:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5604 writes, 24K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5604 writes, 875 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5604 writes, 24K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5604 writes, 875 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:17:25 compute-0 sudo[159922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yofpuusvbhalbafmkrktybwjudlyicsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045445.6828218-161-209972466699945/AnsiballZ_file.py'
Feb 02 15:17:25 compute-0 sudo[159922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:26 compute-0 python3.9[159924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:26 compute-0 sudo[159922]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:26 compute-0 sudo[160074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-velnvconzfezrjmfqmyvulyuzmncluzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045446.231128-161-165805157675996/AnsiballZ_file.py'
Feb 02 15:17:26 compute-0 sudo[160074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:26 compute-0 python3.9[160076]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:26 compute-0 sudo[160074]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:27 compute-0 ceph-mon[75334]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:27 compute-0 sudo[160226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmfxaylbkbvquwzvexvlqqtxdcifoaak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045446.8626208-161-44553365551084/AnsiballZ_file.py'
Feb 02 15:17:27 compute-0 sudo[160226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:27 compute-0 python3.9[160228]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:17:27 compute-0 sudo[160226]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:27 compute-0 sudo[160390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqpoyetzebnjrwylzevtkeksvfykioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045447.5854702-212-179403767984514/AnsiballZ_command.py'
Feb 02 15:17:27 compute-0 sudo[160390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:27 compute-0 podman[160352]: 2026-02-02 15:17:27.877682547 +0000 UTC m=+0.069527575 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:17:28 compute-0 python3.9[160397]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:28 compute-0 sudo[160390]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:28 compute-0 python3.9[160551]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:17:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:29 compute-0 ceph-mon[75334]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:29 compute-0 sudo[160701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzeoicsxfyccmhakmiylfigtujeartkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045449.1215513-230-104774612379446/AnsiballZ_systemd_service.py'
Feb 02 15:17:29 compute-0 sudo[160701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:17:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6895 writes, 28K keys, 6895 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6895 writes, 1297 syncs, 5.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6895 writes, 28K keys, 6895 commit groups, 1.0 writes per commit group, ingest: 19.70 MB, 0.03 MB/s
                                           Interval WAL: 6895 writes, 1297 syncs, 5.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:17:29 compute-0 python3.9[160703]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:17:29 compute-0 systemd[1]: Reloading.
Feb 02 15:17:29 compute-0 systemd-sysv-generator[160727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:17:29 compute-0 systemd-rc-local-generator[160724]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:17:29 compute-0 sudo[160701]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:30 compute-0 sudo[160888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgzoyudctfnawlitwvdtoivuhacuqdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045450.165564-238-201548500758920/AnsiballZ_command.py'
Feb 02 15:17:30 compute-0 sudo[160888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:30 compute-0 python3.9[160890]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:30 compute-0 sudo[160888]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:31 compute-0 sudo[161041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylzpkszvrdmxgadcygwordidvhxvxdhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045450.7785933-238-109923300270900/AnsiballZ_command.py'
Feb 02 15:17:31 compute-0 ceph-mon[75334]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:31 compute-0 sudo[161041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:31 compute-0 python3.9[161043]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:31 compute-0 sudo[161041]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:31 compute-0 sudo[161194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmphmcnvupbxgvsqksvejraztgnzyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045451.3806372-238-216498317403449/AnsiballZ_command.py'
Feb 02 15:17:31 compute-0 sudo[161194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:31 compute-0 python3.9[161196]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:31 compute-0 sudo[161194]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:32 compute-0 sudo[161347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waaqrdeofgdibkelxsylhplokyzaphgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045451.9558067-238-157522211574564/AnsiballZ_command.py'
Feb 02 15:17:32 compute-0 sudo[161347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:32 compute-0 python3.9[161349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:32 compute-0 sudo[161347]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:32 compute-0 sudo[161500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvsrdqvewyocnnnpuwtpdtrvbppijhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045452.5665672-238-31331040062957/AnsiballZ_command.py'
Feb 02 15:17:32 compute-0 sudo[161500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:33 compute-0 ceph-mon[75334]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:33 compute-0 python3.9[161502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:17:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5491 writes, 784 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5489 writes, 23K keys, 5489 commit groups, 1.0 writes per commit group, ingest: 18.38 MB, 0.03 MB/s
                                           Interval WAL: 5490 writes, 784 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:17:33 compute-0 sudo[161500]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:33 compute-0 sudo[161653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsgwhtmigvcfgldbowjcwyzoswawlsss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045453.2442214-238-7266302620754/AnsiballZ_command.py'
Feb 02 15:17:33 compute-0 sudo[161653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:33 compute-0 python3.9[161655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:33 compute-0 sudo[161653]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:34 compute-0 sudo[161806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynmvkctyksyunrlpjkgtllyjeahnptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045453.8870077-238-40588228981730/AnsiballZ_command.py'
Feb 02 15:17:34 compute-0 sudo[161806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:34 compute-0 python3.9[161808]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:17:34 compute-0 sudo[161806]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:35 compute-0 ceph-mon[75334]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:35 compute-0 sudo[161959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakodfxluvdbxnvpxycipllkhvujcfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045454.7678852-292-5048467153051/AnsiballZ_getent.py'
Feb 02 15:17:35 compute-0 sudo[161959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:35 compute-0 python3.9[161961]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 02 15:17:35 compute-0 sudo[161959]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:17:36 compute-0 sudo[162112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbybmlsmwjmogzmhwsyyhikqkctyujgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045455.5509558-300-16103948684388/AnsiballZ_group.py'
Feb 02 15:17:36 compute-0 sudo[162112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:36 compute-0 python3.9[162114]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:17:36 compute-0 groupadd[162115]: group added to /etc/group: name=libvirt, GID=42473
Feb 02 15:17:36 compute-0 groupadd[162115]: group added to /etc/gshadow: name=libvirt
Feb 02 15:17:36 compute-0 groupadd[162115]: new group: name=libvirt, GID=42473
Feb 02 15:17:36 compute-0 sudo[162112]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:36 compute-0 sudo[162270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjkwoigrmbwmixbkzwnqbwnbfmdhefzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045456.4603055-308-49493013321182/AnsiballZ_user.py'
Feb 02 15:17:36 compute-0 sudo[162270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:37 compute-0 ceph-mon[75334]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:37 compute-0 python3.9[162272]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 15:17:37 compute-0 useradd[162274]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Feb 02 15:17:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:17:37 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:17:37 compute-0 sudo[162270]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:37 compute-0 sudo[162431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmumfmcjtmuturdoarvckwfwcfbeeyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045457.607196-319-117871656619028/AnsiballZ_setup.py'
Feb 02 15:17:37 compute-0 sudo[162431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:38 compute-0 python3.9[162433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:17:38 compute-0 sudo[162431]: pam_unix(sudo:session): session closed for user root
Feb 02 15:17:38 compute-0 sudo[162515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikmrtavkhovkweoonckydcnkbluuzqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045457.607196-319-117871656619028/AnsiballZ_dnf.py'
Feb 02 15:17:38 compute-0 sudo[162515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:17:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:39 compute-0 python3.9[162517]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:17:39 compute-0 ceph-mon[75334]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:41 compute-0 ceph-mon[75334]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:17:42
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data']
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:17:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:43 compute-0 ceph-mon[75334]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:17:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:45 compute-0 ceph-mon[75334]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:47 compute-0 ceph-mon[75334]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:50 compute-0 ceph-mon[75334]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:52 compute-0 ceph-mon[75334]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:54 compute-0 ceph-mon[75334]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:17:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:56 compute-0 ceph-mon[75334]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:56 compute-0 podman[162533]: 2026-02-02 15:17:56.349561183 +0000 UTC m=+0.090898513 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 15:17:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:17:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:58 compute-0 ceph-mon[75334]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:58 compute-0 podman[162561]: 2026-02-02 15:17:58.3243168 +0000 UTC m=+0.065502232 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:17:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:17:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:59.227 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:17:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:59.228 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:17:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:17:59.228 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:18:00 compute-0 ceph-mon[75334]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:02 compute-0 ceph-mon[75334]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:04 compute-0 ceph-mon[75334]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:06 compute-0 ceph-mon[75334]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:06 compute-0 sudo[162752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:18:06 compute-0 sudo[162752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:06 compute-0 sudo[162752]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:06 compute-0 sudo[162777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:18:06 compute-0 sudo[162777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:07 compute-0 sudo[162777]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:18:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:18:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:18:07 compute-0 sudo[162834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:18:07 compute-0 sudo[162834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:07 compute-0 sudo[162834]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:07 compute-0 sudo[162859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:18:07 compute-0 sudo[162859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.863173153 +0000 UTC m=+0.061203608 container create 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:18:07 compute-0 systemd[1]: Started libpod-conmon-308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46.scope.
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.838551829 +0000 UTC m=+0.036582374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.960559552 +0000 UTC m=+0.158590017 container init 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.968535625 +0000 UTC m=+0.166566100 container start 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.973218508 +0000 UTC m=+0.171248973 container attach 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:18:07 compute-0 vigorous_cohen[162914]: 167 167
Feb 02 15:18:07 compute-0 systemd[1]: libpod-308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46.scope: Deactivated successfully.
Feb 02 15:18:07 compute-0 conmon[162914]: conmon 308edcb689cbb4c7006d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46.scope/container/memory.events
Feb 02 15:18:07 compute-0 podman[162897]: 2026-02-02 15:18:07.977960523 +0000 UTC m=+0.175990988 container died 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f95a99ed2015f51fb8a6fdd089e0d3d9cafed954d05d700cdf935e4ec121d97e-merged.mount: Deactivated successfully.
Feb 02 15:18:08 compute-0 podman[162897]: 2026-02-02 15:18:08.031144476 +0000 UTC m=+0.229174981 container remove 308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:18:08 compute-0 systemd[1]: libpod-conmon-308edcb689cbb4c7006d7ded06a06322df6c11f227d1dd141a0e206e6ce7ad46.scope: Deactivated successfully.
Feb 02 15:18:08 compute-0 ceph-mon[75334]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:18:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:18:08 compute-0 podman[162939]: 2026-02-02 15:18:08.179592668 +0000 UTC m=+0.059298283 container create ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:18:08 compute-0 systemd[1]: Started libpod-conmon-ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c.scope.
Feb 02 15:18:08 compute-0 podman[162939]: 2026-02-02 15:18:08.147964604 +0000 UTC m=+0.027670269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:08 compute-0 podman[162939]: 2026-02-02 15:18:08.267481897 +0000 UTC m=+0.147187572 container init ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:18:08 compute-0 podman[162939]: 2026-02-02 15:18:08.27667999 +0000 UTC m=+0.156385615 container start ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:18:08 compute-0 podman[162939]: 2026-02-02 15:18:08.279844746 +0000 UTC m=+0.159550381 container attach ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:18:08 compute-0 recursing_beaver[162956]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:18:08 compute-0 recursing_beaver[162956]: --> All data devices are unavailable
Feb 02 15:18:08 compute-0 systemd[1]: libpod-ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c.scope: Deactivated successfully.
Feb 02 15:18:08 compute-0 podman[162976]: 2026-02-02 15:18:08.787010422 +0000 UTC m=+0.030602049 container died ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bd0788b8b3291655807c5afb39f154bc91770241887e3094b14114f78b960aa-merged.mount: Deactivated successfully.
Feb 02 15:18:08 compute-0 podman[162976]: 2026-02-02 15:18:08.82293538 +0000 UTC m=+0.066526966 container remove ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:18:08 compute-0 systemd[1]: libpod-conmon-ab93b5d1e24cff8492690e785e70d6bda117ab27c43eb73517eda1727698d69c.scope: Deactivated successfully.
Feb 02 15:18:08 compute-0 sudo[162859]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:08 compute-0 sudo[162991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:18:08 compute-0 sudo[162991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:08 compute-0 sudo[162991]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:08 compute-0 sudo[163016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:18:08 compute-0 sudo[163016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.287922289 +0000 UTC m=+0.057526789 container create c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:18:09 compute-0 systemd[1]: Started libpod-conmon-c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe.scope.
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.255837174 +0000 UTC m=+0.025441774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.370296996 +0000 UTC m=+0.139901516 container init c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.386361354 +0000 UTC m=+0.155965874 container start c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.390469353 +0000 UTC m=+0.160073933 container attach c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:18:09 compute-0 inspiring_elbakyan[163070]: 167 167
Feb 02 15:18:09 compute-0 systemd[1]: libpod-c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe.scope: Deactivated successfully.
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.393130627 +0000 UTC m=+0.162735157 container died c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28f8f418f88bd17eb248065f762499b937d6d9a59ede0ce646d4853a91b138f-merged.mount: Deactivated successfully.
Feb 02 15:18:09 compute-0 podman[163053]: 2026-02-02 15:18:09.432561589 +0000 UTC m=+0.202166129 container remove c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:18:09 compute-0 systemd[1]: libpod-conmon-c561bb71198b4e1e9307c7327091f6948bd7e6cf742f0282ae7608f241bdd7fe.scope: Deactivated successfully.
Feb 02 15:18:09 compute-0 podman[163095]: 2026-02-02 15:18:09.569366349 +0000 UTC m=+0.049185937 container create 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:18:09 compute-0 systemd[1]: Started libpod-conmon-1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db.scope.
Feb 02 15:18:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a5e5de3765515def9475973a5462430988ec152fdf7e8c8074394c0f8eab60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:09 compute-0 podman[163095]: 2026-02-02 15:18:09.550046883 +0000 UTC m=+0.029866541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a5e5de3765515def9475973a5462430988ec152fdf7e8c8074394c0f8eab60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a5e5de3765515def9475973a5462430988ec152fdf7e8c8074394c0f8eab60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a5e5de3765515def9475973a5462430988ec152fdf7e8c8074394c0f8eab60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:09 compute-0 podman[163095]: 2026-02-02 15:18:09.679316063 +0000 UTC m=+0.159135711 container init 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:18:09 compute-0 podman[163095]: 2026-02-02 15:18:09.688241838 +0000 UTC m=+0.168061426 container start 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:18:09 compute-0 podman[163095]: 2026-02-02 15:18:09.692011699 +0000 UTC m=+0.171831357 container attach 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]: {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     "0": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "devices": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "/dev/loop3"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             ],
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_name": "ceph_lv0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_size": "21470642176",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "name": "ceph_lv0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "tags": {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_name": "ceph",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.crush_device_class": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.encrypted": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.objectstore": "bluestore",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_id": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.vdo": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.with_tpm": "0"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             },
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "vg_name": "ceph_vg0"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         }
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     ],
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     "1": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "devices": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "/dev/loop4"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             ],
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_name": "ceph_lv1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_size": "21470642176",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "name": "ceph_lv1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "tags": {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_name": "ceph",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.crush_device_class": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.encrypted": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.objectstore": "bluestore",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_id": "1",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.vdo": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.with_tpm": "0"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             },
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "vg_name": "ceph_vg1"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         }
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     ],
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     "2": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "devices": [
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "/dev/loop5"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             ],
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_name": "ceph_lv2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_size": "21470642176",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "name": "ceph_lv2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "tags": {
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.cluster_name": "ceph",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.crush_device_class": "",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.encrypted": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.objectstore": "bluestore",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osd_id": "2",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.vdo": "0",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:                 "ceph.with_tpm": "0"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             },
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "type": "block",
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:             "vg_name": "ceph_vg2"
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:         }
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]:     ]
Feb 02 15:18:09 compute-0 wonderful_tharp[163112]: }
Feb 02 15:18:10 compute-0 systemd[1]: libpod-1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db.scope: Deactivated successfully.
Feb 02 15:18:10 compute-0 podman[163095]: 2026-02-02 15:18:10.029401949 +0000 UTC m=+0.509221537 container died 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a5e5de3765515def9475973a5462430988ec152fdf7e8c8074394c0f8eab60-merged.mount: Deactivated successfully.
Feb 02 15:18:10 compute-0 podman[163095]: 2026-02-02 15:18:10.079526389 +0000 UTC m=+0.559346007 container remove 1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:18:10 compute-0 systemd[1]: libpod-conmon-1c9a8079f0975d82c0657087516b2f30b171dbb43fec469d19e486eaafa0b6db.scope: Deactivated successfully.
Feb 02 15:18:10 compute-0 sudo[163016]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:10 compute-0 ceph-mon[75334]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:10 compute-0 sudo[163134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:18:10 compute-0 sudo[163134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:10 compute-0 sudo[163134]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:10 compute-0 sudo[163159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:18:10 compute-0 sudo[163159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.538462951 +0000 UTC m=+0.039624166 container create 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:18:10 compute-0 systemd[1]: Started libpod-conmon-19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b.scope.
Feb 02 15:18:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.521956444 +0000 UTC m=+0.023117679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.628884203 +0000 UTC m=+0.130045508 container init 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.637199844 +0000 UTC m=+0.138361059 container start 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:18:10 compute-0 wonderful_moore[163215]: 167 167
Feb 02 15:18:10 compute-0 systemd[1]: libpod-19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b.scope: Deactivated successfully.
Feb 02 15:18:10 compute-0 conmon[163215]: conmon 19becc03e853325ae9eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b.scope/container/memory.events
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.644469119 +0000 UTC m=+0.145630334 container attach 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.645892614 +0000 UTC m=+0.147053849 container died 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-98018e398b314d248f8f5c6bcdf84c5c3dd8bace8d256bdc83bdc72777c6c050-merged.mount: Deactivated successfully.
Feb 02 15:18:10 compute-0 podman[163200]: 2026-02-02 15:18:10.683542872 +0000 UTC m=+0.184704087 container remove 19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:18:10 compute-0 systemd[1]: libpod-conmon-19becc03e853325ae9ebb04c35f8bf02343b03de1b3290e332a2aa89fa0e0a5b.scope: Deactivated successfully.
Feb 02 15:18:10 compute-0 podman[163242]: 2026-02-02 15:18:10.841829981 +0000 UTC m=+0.059065056 container create 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:18:10 compute-0 systemd[1]: Started libpod-conmon-9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27.scope.
Feb 02 15:18:10 compute-0 podman[163242]: 2026-02-02 15:18:10.818251332 +0000 UTC m=+0.035486437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:18:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7287ba17106964b98441aa76bb3e2891a4210bb9dc9a847e8eed141594116cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7287ba17106964b98441aa76bb3e2891a4210bb9dc9a847e8eed141594116cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7287ba17106964b98441aa76bb3e2891a4210bb9dc9a847e8eed141594116cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7287ba17106964b98441aa76bb3e2891a4210bb9dc9a847e8eed141594116cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:18:10 compute-0 podman[163242]: 2026-02-02 15:18:10.950479663 +0000 UTC m=+0.167714748 container init 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:18:10 compute-0 podman[163242]: 2026-02-02 15:18:10.957977244 +0000 UTC m=+0.175212319 container start 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:18:10 compute-0 podman[163242]: 2026-02-02 15:18:10.961741815 +0000 UTC m=+0.178976890 container attach 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:18:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:11 compute-0 lvm[163336]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:18:11 compute-0 lvm[163336]: VG ceph_vg0 finished
Feb 02 15:18:11 compute-0 lvm[163339]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:18:11 compute-0 lvm[163339]: VG ceph_vg1 finished
Feb 02 15:18:11 compute-0 lvm[163341]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:18:11 compute-0 lvm[163341]: VG ceph_vg2 finished
Feb 02 15:18:11 compute-0 ecstatic_williamson[163259]: {}
Feb 02 15:18:11 compute-0 systemd[1]: libpod-9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27.scope: Deactivated successfully.
Feb 02 15:18:11 compute-0 systemd[1]: libpod-9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27.scope: Consumed 1.149s CPU time.
Feb 02 15:18:11 compute-0 podman[163242]: 2026-02-02 15:18:11.743928317 +0000 UTC m=+0.961163422 container died 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:18:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7287ba17106964b98441aa76bb3e2891a4210bb9dc9a847e8eed141594116cd-merged.mount: Deactivated successfully.
Feb 02 15:18:11 compute-0 podman[163242]: 2026-02-02 15:18:11.813004514 +0000 UTC m=+1.030239589 container remove 9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:18:11 compute-0 systemd[1]: libpod-conmon-9c64e61c18acb3c5e88db98848cfbe2568b31c8770142a6e418771d8f9a0fc27.scope: Deactivated successfully.
Feb 02 15:18:11 compute-0 sudo[163159]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:18:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:18:11 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:11 compute-0 sudo[163357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:18:11 compute-0 sudo[163357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:18:11 compute-0 sudo[163357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:18:12 compute-0 ceph-mon[75334]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:12 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:18:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:14 compute-0 ceph-mon[75334]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:16 compute-0 ceph-mon[75334]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:18 compute-0 ceph-mon[75334]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:20 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:18:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:18:21 compute-0 ceph-mon[75334]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:22 compute-0 ceph-mon[75334]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:24 compute-0 ceph-mon[75334]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Feb 02 15:18:26 compute-0 ceph-mon[75334]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Feb 02 15:18:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:27 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 02 15:18:27 compute-0 podman[163391]: 2026-02-02 15:18:27.447592092 +0000 UTC m=+0.171678783 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:18:28 compute-0 ceph-mon[75334]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:29 compute-0 podman[163418]: 2026-02-02 15:18:29.311959445 +0000 UTC m=+0.059146329 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 02 15:18:29 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:18:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:18:30 compute-0 ceph-mon[75334]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:32 compute-0 ceph-mon[75334]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:34 compute-0 ceph-mon[75334]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:36 compute-0 ceph-mon[75334]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:18:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Feb 02 15:18:38 compute-0 ceph-mon[75334]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Feb 02 15:18:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:40 compute-0 ceph-mon[75334]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:42 compute-0 ceph-mon[75334]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:18:42
Feb 02 15:18:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:18:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:18:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta']
Feb 02 15:18:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:18:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:44 compute-0 ceph-mon[75334]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:18:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:18:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:46 compute-0 ceph-mon[75334]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:48 compute-0 ceph-mon[75334]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:50 compute-0 ceph-mon[75334]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:52 compute-0 ceph-mon[75334]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:18:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:18:54 compute-0 ceph-mon[75334]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:18:56 compute-0 ceph-mon[75334]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:58 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 02 15:18:58 compute-0 podman[176086]: 2026-02-02 15:18:58.379315134 +0000 UTC m=+0.109182265 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 15:18:58 compute-0 ceph-mon[75334]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:18:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:18:59.228 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:18:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:18:59.229 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:18:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:18:59.229 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:19:00 compute-0 podman[177489]: 2026-02-02 15:19:00.337617549 +0000 UTC m=+0.074500926 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 02 15:19:00 compute-0 ceph-mon[75334]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:02 compute-0 ceph-mon[75334]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:04 compute-0 ceph-mon[75334]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:06 compute-0 ceph-mon[75334]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:08 compute-0 ceph-mon[75334]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:10 compute-0 ceph-mon[75334]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:11 compute-0 sudo[180358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:19:12 compute-0 sudo[180358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:12 compute-0 sudo[180358]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:12 compute-0 sudo[180383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:19:12 compute-0 sudo[180383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:12 compute-0 ceph-mon[75334]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:12 compute-0 podman[180450]: 2026-02-02 15:19:12.546185755 +0000 UTC m=+0.079016322 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:19:12 compute-0 podman[180450]: 2026-02-02 15:19:12.631001848 +0000 UTC m=+0.163832365 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:19:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:13 compute-0 sudo[180383]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:13 compute-0 sudo[180638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:19:13 compute-0 sudo[180638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:13 compute-0 sudo[180638]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:13 compute-0 sudo[180663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:19:13 compute-0 sudo[180663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:13 compute-0 sudo[180663]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:19:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:19:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:19:14 compute-0 sudo[180719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:19:14 compute-0 sudo[180719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:14 compute-0 sudo[180719]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:14 compute-0 sudo[180744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:19:14 compute-0 sudo[180744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:14 compute-0 ceph-mon[75334]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:19:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.335665666 +0000 UTC m=+0.046086427 container create b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:19:14 compute-0 systemd[1]: Started libpod-conmon-b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d.scope.
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.31729919 +0000 UTC m=+0.027719921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.443023654 +0000 UTC m=+0.153444465 container init b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.448177108 +0000 UTC m=+0.158597839 container start b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.451477763 +0000 UTC m=+0.161898574 container attach b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:19:14 compute-0 systemd[1]: libpod-b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d.scope: Deactivated successfully.
Feb 02 15:19:14 compute-0 unruffled_joliot[180797]: 167 167
Feb 02 15:19:14 compute-0 conmon[180797]: conmon b7cb00c824330183a47a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d.scope/container/memory.events
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.456263487 +0000 UTC m=+0.166684248 container died b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-dddf94b8200e1a5059be272ef37b5b3ecc782c61a54ef77247faa7f43fcb3219-merged.mount: Deactivated successfully.
Feb 02 15:19:14 compute-0 podman[180781]: 2026-02-02 15:19:14.509087749 +0000 UTC m=+0.219508500 container remove b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:19:14 compute-0 systemd[1]: libpod-conmon-b7cb00c824330183a47a81880d0adaf5d166923312083de56b6d21cb8e0e9c2d.scope: Deactivated successfully.
Feb 02 15:19:14 compute-0 podman[180820]: 2026-02-02 15:19:14.641366294 +0000 UTC m=+0.039400855 container create 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:19:14 compute-0 systemd[1]: Started libpod-conmon-13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f.scope.
Feb 02 15:19:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:14 compute-0 podman[180820]: 2026-02-02 15:19:14.621307853 +0000 UTC m=+0.019342484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:14 compute-0 podman[180820]: 2026-02-02 15:19:14.722417637 +0000 UTC m=+0.120452198 container init 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:19:14 compute-0 podman[180820]: 2026-02-02 15:19:14.731159815 +0000 UTC m=+0.129194356 container start 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:19:14 compute-0 podman[180820]: 2026-02-02 15:19:14.735148179 +0000 UTC m=+0.133182740 container attach 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:19:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:15 compute-0 confident_hypatia[180836]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:19:15 compute-0 confident_hypatia[180836]: --> All data devices are unavailable
Feb 02 15:19:15 compute-0 systemd[1]: libpod-13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f.scope: Deactivated successfully.
Feb 02 15:19:15 compute-0 podman[180820]: 2026-02-02 15:19:15.224654338 +0000 UTC m=+0.622688919 container died 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d664d11d390bb425c8c813804c01eebb0b65de73005a78cf444312f7e30260e2-merged.mount: Deactivated successfully.
Feb 02 15:19:15 compute-0 podman[180820]: 2026-02-02 15:19:15.366880601 +0000 UTC m=+0.764915182 container remove 13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:19:15 compute-0 systemd[1]: libpod-conmon-13f04cd6ca74467bbface502982657d69e117a3406ac960852646dce7239da2f.scope: Deactivated successfully.
Feb 02 15:19:15 compute-0 sudo[180744]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:15 compute-0 sudo[180868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:19:15 compute-0 sudo[180868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:15 compute-0 sudo[180868]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:15 compute-0 sudo[180893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:19:15 compute-0 sudo[180893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.791586417 +0000 UTC m=+0.053920020 container create 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:19:15 compute-0 systemd[1]: Started libpod-conmon-49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122.scope.
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.762230655 +0000 UTC m=+0.024564318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.876878572 +0000 UTC m=+0.139212235 container init 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.881907423 +0000 UTC m=+0.144241016 container start 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.885406483 +0000 UTC m=+0.147740096 container attach 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:19:15 compute-0 objective_nash[180948]: 167 167
Feb 02 15:19:15 compute-0 systemd[1]: libpod-49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122.scope: Deactivated successfully.
Feb 02 15:19:15 compute-0 conmon[180948]: conmon 49c2b6afc54f43d51698 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122.scope/container/memory.events
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.887942849 +0000 UTC m=+0.150276422 container died 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8cd37bb22e0cd1112058ee2a4d44316201781ecfdcf3e79a53e8b4862f2b6c2-merged.mount: Deactivated successfully.
Feb 02 15:19:15 compute-0 podman[180931]: 2026-02-02 15:19:15.929885398 +0000 UTC m=+0.192218971 container remove 49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:19:15 compute-0 systemd[1]: libpod-conmon-49c2b6afc54f43d516989fb8ee2828e8db52af6527957547617544e640ab9122.scope: Deactivated successfully.
Feb 02 15:19:16 compute-0 podman[180976]: 2026-02-02 15:19:16.088180048 +0000 UTC m=+0.034747873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:16 compute-0 ceph-mon[75334]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:16 compute-0 podman[180976]: 2026-02-02 15:19:16.559898946 +0000 UTC m=+0.506466761 container create ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:19:16 compute-0 systemd[1]: Started libpod-conmon-ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd.scope.
Feb 02 15:19:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353b9d46d68dcf25ff76904cc67de0cad8dd5bcde762fb8e42b93471aea156b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353b9d46d68dcf25ff76904cc67de0cad8dd5bcde762fb8e42b93471aea156b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353b9d46d68dcf25ff76904cc67de0cad8dd5bcde762fb8e42b93471aea156b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8353b9d46d68dcf25ff76904cc67de0cad8dd5bcde762fb8e42b93471aea156b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:16 compute-0 podman[180976]: 2026-02-02 15:19:16.686110183 +0000 UTC m=+0.632678008 container init ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:19:16 compute-0 podman[180976]: 2026-02-02 15:19:16.69409217 +0000 UTC m=+0.640659935 container start ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:19:16 compute-0 podman[180976]: 2026-02-02 15:19:16.705493546 +0000 UTC m=+0.652061421 container attach ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:19:16 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 02 15:19:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 02 15:19:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:16 compute-0 condescending_williams[180996]: {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     "0": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "devices": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "/dev/loop3"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             ],
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_name": "ceph_lv0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_size": "21470642176",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "name": "ceph_lv0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "tags": {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_name": "ceph",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.crush_device_class": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.encrypted": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.objectstore": "bluestore",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_id": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.vdo": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.with_tpm": "0"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             },
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "vg_name": "ceph_vg0"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         }
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     ],
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     "1": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "devices": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "/dev/loop4"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             ],
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_name": "ceph_lv1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_size": "21470642176",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "name": "ceph_lv1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "tags": {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_name": "ceph",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.crush_device_class": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.encrypted": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.objectstore": "bluestore",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_id": "1",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.vdo": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.with_tpm": "0"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             },
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "vg_name": "ceph_vg1"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         }
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     ],
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     "2": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "devices": [
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "/dev/loop5"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             ],
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_name": "ceph_lv2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_size": "21470642176",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "name": "ceph_lv2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "tags": {
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.cluster_name": "ceph",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.crush_device_class": "",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.encrypted": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.objectstore": "bluestore",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osd_id": "2",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.vdo": "0",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:                 "ceph.with_tpm": "0"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             },
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "type": "block",
Feb 02 15:19:16 compute-0 condescending_williams[180996]:             "vg_name": "ceph_vg2"
Feb 02 15:19:16 compute-0 condescending_williams[180996]:         }
Feb 02 15:19:16 compute-0 condescending_williams[180996]:     ]
Feb 02 15:19:16 compute-0 condescending_williams[180996]: }
Feb 02 15:19:17 compute-0 systemd[1]: libpod-ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd.scope: Deactivated successfully.
Feb 02 15:19:17 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 02 15:19:17 compute-0 podman[180976]: 2026-02-02 15:19:17.010390843 +0000 UTC m=+0.956958668 container died ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:19:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8353b9d46d68dcf25ff76904cc67de0cad8dd5bcde762fb8e42b93471aea156b-merged.mount: Deactivated successfully.
Feb 02 15:19:17 compute-0 podman[180976]: 2026-02-02 15:19:17.075592815 +0000 UTC m=+1.022160600 container remove ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:19:17 compute-0 systemd[1]: libpod-conmon-ef4a66ea55e2d260940951d4699881c0dcc5cfb002c9fdbd41d22e99f00b8abd.scope: Deactivated successfully.
Feb 02 15:19:17 compute-0 sudo[180893]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:17 compute-0 sudo[181020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:19:17 compute-0 sudo[181020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:17 compute-0 sudo[181020]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:17 compute-0 sudo[181045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:19:17 compute-0 sudo[181045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.537687573 +0000 UTC m=+0.053364907 container create 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:19:17 compute-0 systemd[1]: Started libpod-conmon-77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378.scope.
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.508252068 +0000 UTC m=+0.023929442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.633928802 +0000 UTC m=+0.149606166 container init 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.639778853 +0000 UTC m=+0.155456177 container start 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:19:17 compute-0 wizardly_williams[181102]: 167 167
Feb 02 15:19:17 compute-0 systemd[1]: libpod-77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378.scope: Deactivated successfully.
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.645607985 +0000 UTC m=+0.161285359 container attach 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.646373635 +0000 UTC m=+0.162050999 container died 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-994ad6201f02ac4dc6a952a30b163bf447a1298a90360c415c7775192a98ed59-merged.mount: Deactivated successfully.
Feb 02 15:19:17 compute-0 podman[181085]: 2026-02-02 15:19:17.698781705 +0000 UTC m=+0.214459059 container remove 77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:19:17 compute-0 systemd[1]: libpod-conmon-77ab52533d69875e4707609f82403926c06efef8a7ff3f7fd64a123979fa9378.scope: Deactivated successfully.
Feb 02 15:19:17 compute-0 groupadd[181124]: group added to /etc/group: name=dnsmasq, GID=992
Feb 02 15:19:17 compute-0 groupadd[181124]: group added to /etc/gshadow: name=dnsmasq
Feb 02 15:19:17 compute-0 groupadd[181124]: new group: name=dnsmasq, GID=992
Feb 02 15:19:17 compute-0 useradd[181138]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 02 15:19:17 compute-0 podman[181133]: 2026-02-02 15:19:17.875060082 +0000 UTC m=+0.057438542 container create 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:19:17 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:19:17 compute-0 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Feb 02 15:19:17 compute-0 systemd[1]: Started libpod-conmon-522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72.scope.
Feb 02 15:19:17 compute-0 podman[181133]: 2026-02-02 15:19:17.854115118 +0000 UTC m=+0.036493578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:19:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e833e18965d2ab9211a3da5413a2539d8984821d1cdd43c60f42d306f955547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e833e18965d2ab9211a3da5413a2539d8984821d1cdd43c60f42d306f955547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e833e18965d2ab9211a3da5413a2539d8984821d1cdd43c60f42d306f955547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e833e18965d2ab9211a3da5413a2539d8984821d1cdd43c60f42d306f955547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:19:17 compute-0 podman[181133]: 2026-02-02 15:19:17.991512065 +0000 UTC m=+0.173890555 container init 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:19:18 compute-0 podman[181133]: 2026-02-02 15:19:18.00517143 +0000 UTC m=+0.187549890 container start 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:19:18 compute-0 podman[181133]: 2026-02-02 15:19:18.016111824 +0000 UTC m=+0.198490324 container attach 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:19:18 compute-0 ceph-mon[75334]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:18 compute-0 lvm[181240]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:19:18 compute-0 lvm[181240]: VG ceph_vg0 finished
Feb 02 15:19:18 compute-0 lvm[181241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:19:18 compute-0 lvm[181241]: VG ceph_vg1 finished
Feb 02 15:19:18 compute-0 lvm[181243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:19:18 compute-0 lvm[181243]: VG ceph_vg2 finished
Feb 02 15:19:18 compute-0 sad_bohr[181162]: {}
Feb 02 15:19:18 compute-0 systemd[1]: libpod-522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72.scope: Deactivated successfully.
Feb 02 15:19:18 compute-0 systemd[1]: libpod-522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72.scope: Consumed 1.192s CPU time.
Feb 02 15:19:18 compute-0 podman[181247]: 2026-02-02 15:19:18.854663546 +0000 UTC m=+0.033418209 container died 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e833e18965d2ab9211a3da5413a2539d8984821d1cdd43c60f42d306f955547-merged.mount: Deactivated successfully.
Feb 02 15:19:18 compute-0 groupadd[181262]: group added to /etc/group: name=clevis, GID=991
Feb 02 15:19:18 compute-0 groupadd[181262]: group added to /etc/gshadow: name=clevis
Feb 02 15:19:18 compute-0 groupadd[181262]: new group: name=clevis, GID=991
Feb 02 15:19:18 compute-0 podman[181247]: 2026-02-02 15:19:18.901339158 +0000 UTC m=+0.080093791 container remove 522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:19:18 compute-0 systemd[1]: libpod-conmon-522f240223b5c7aeb073e3381f1d3ef9271bb1a84e59cd5e8fa65527eea69d72.scope: Deactivated successfully.
Feb 02 15:19:18 compute-0 useradd[181271]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 02 15:19:18 compute-0 sudo[181045]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:19:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:19:18 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:18 compute-0 usermod[181281]: add 'clevis' to group 'tss'
Feb 02 15:19:18 compute-0 usermod[181281]: add 'clevis' to shadow group 'tss'
Feb 02 15:19:19 compute-0 sudo[181282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:19:19 compute-0 sudo[181282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:19:19 compute-0 sudo[181282]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:19 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:19:20 compute-0 ceph-mon[75334]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:21 compute-0 polkitd[43651]: Reloading rules
Feb 02 15:19:21 compute-0 polkitd[43651]: Collecting garbage unconditionally...
Feb 02 15:19:21 compute-0 polkitd[43651]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 15:19:21 compute-0 polkitd[43651]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 15:19:21 compute-0 polkitd[43651]: Finished loading, compiling and executing 3 rules
Feb 02 15:19:21 compute-0 polkitd[43651]: Reloading rules
Feb 02 15:19:21 compute-0 polkitd[43651]: Collecting garbage unconditionally...
Feb 02 15:19:21 compute-0 polkitd[43651]: Loading rules from directory /etc/polkit-1/rules.d
Feb 02 15:19:21 compute-0 polkitd[43651]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 02 15:19:21 compute-0 polkitd[43651]: Finished loading, compiling and executing 3 rules
Feb 02 15:19:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:22 compute-0 ceph-mon[75334]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:24 compute-0 ceph-mon[75334]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:25 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Feb 02 15:19:25 compute-0 sshd[1005]: Received signal 15; terminating.
Feb 02 15:19:25 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Feb 02 15:19:25 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Feb 02 15:19:25 compute-0 systemd[1]: sshd.service: Consumed 2.130s CPU time, read 32.0K from disk, written 0B to disk.
Feb 02 15:19:25 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Feb 02 15:19:25 compute-0 systemd[1]: Stopping sshd-keygen.target...
Feb 02 15:19:25 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 15:19:25 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 15:19:25 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 02 15:19:25 compute-0 systemd[1]: Reached target sshd-keygen.target.
Feb 02 15:19:25 compute-0 systemd[1]: Starting OpenSSH server daemon...
Feb 02 15:19:25 compute-0 sshd[182114]: Server listening on 0.0.0.0 port 22.
Feb 02 15:19:25 compute-0 sshd[182114]: Server listening on :: port 22.
Feb 02 15:19:25 compute-0 systemd[1]: Started OpenSSH server daemon.
Feb 02 15:19:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:27 compute-0 ceph-mon[75334]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:19:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:19:27 compute-0 systemd[1]: Reloading.
Feb 02 15:19:27 compute-0 systemd-rc-local-generator[182365]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:27 compute-0 systemd-sysv-generator[182371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:27 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:19:28 compute-0 ceph-mon[75334]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:29 compute-0 podman[184346]: 2026-02-02 15:19:29.350073723 +0000 UTC m=+0.096621890 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 15:19:30 compute-0 sudo[162515]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:30 compute-0 ceph-mon[75334]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:30 compute-0 sudo[186409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwedgpfgpqsknxbnaprcwwcsuolqzfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045570.2506015-331-89568548307690/AnsiballZ_systemd.py'
Feb 02 15:19:30 compute-0 podman[186309]: 2026-02-02 15:19:30.84820646 +0000 UTC m=+0.047268059 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:19:30 compute-0 sudo[186409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:31 compute-0 python3.9[186436]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:19:31 compute-0 systemd[1]: Reloading.
Feb 02 15:19:31 compute-0 systemd-rc-local-generator[186959]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:31 compute-0 systemd-sysv-generator[186963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:31 compute-0 sudo[186409]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:31 compute-0 sudo[187737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubfabgzmixpskkzcswqkcolcejmyzfiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045571.5954137-331-96792856967820/AnsiballZ_systemd.py'
Feb 02 15:19:31 compute-0 sudo[187737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:32 compute-0 python3.9[187756]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:19:32 compute-0 systemd[1]: Reloading.
Feb 02 15:19:32 compute-0 ceph-mon[75334]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:32 compute-0 systemd-sysv-generator[188342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:32 compute-0 systemd-rc-local-generator[188338]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:32 compute-0 sudo[187737]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:32 compute-0 sudo[189120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yywddloejyaeymlmedrdhoyjhkppsdmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045572.6502182-331-200487633142113/AnsiballZ_systemd.py'
Feb 02 15:19:32 compute-0 sudo[189120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:33 compute-0 python3.9[189138]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:19:33 compute-0 systemd[1]: Reloading.
Feb 02 15:19:33 compute-0 systemd-rc-local-generator[189843]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:33 compute-0 systemd-sysv-generator[189848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:33 compute-0 sudo[189120]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:33 compute-0 sudo[190718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxszhkrhufvktdgxeciczfusqoddpqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045573.604794-331-47781794380198/AnsiballZ_systemd.py'
Feb 02 15:19:33 compute-0 sudo[190718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:34 compute-0 python3.9[190752]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:19:34 compute-0 systemd[1]: Reloading.
Feb 02 15:19:34 compute-0 ceph-mon[75334]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:34 compute-0 systemd-sysv-generator[191295]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:34 compute-0 systemd-rc-local-generator[191290]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:34 compute-0 sudo[190718]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:19:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:19:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.011s CPU time.
Feb 02 15:19:34 compute-0 systemd[1]: run-r3818e7645f114fcd8d824bd24936b5d3.service: Deactivated successfully.
Feb 02 15:19:34 compute-0 sudo[191715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivgkctxeufswmzxygivfxcmjfhuvamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045574.702664-360-179240837247351/AnsiballZ_systemd.py'
Feb 02 15:19:34 compute-0 sudo[191715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:35 compute-0 python3.9[191717]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:35 compute-0 systemd[1]: Reloading.
Feb 02 15:19:35 compute-0 systemd-sysv-generator[191748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:35 compute-0 systemd-rc-local-generator[191743]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:35 compute-0 sudo[191715]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:36 compute-0 sudo[191905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpbqbwrxzlalcgzbxcdbybxkxhjsbay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045575.7214894-360-142287640199943/AnsiballZ_systemd.py'
Feb 02 15:19:36 compute-0 sudo[191905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:36 compute-0 ceph-mon[75334]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:36 compute-0 python3.9[191907]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:36 compute-0 systemd[1]: Reloading.
Feb 02 15:19:36 compute-0 systemd-rc-local-generator[191936]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:36 compute-0 systemd-sysv-generator[191939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:36 compute-0 sudo[191905]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:37 compute-0 sudo[192096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumszpdsspvcrnugwfnbxyirdtljgiky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045576.8728633-360-135817328328586/AnsiballZ_systemd.py'
Feb 02 15:19:37 compute-0 sudo[192096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:37 compute-0 python3.9[192098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:37 compute-0 systemd[1]: Reloading.
Feb 02 15:19:37 compute-0 systemd-rc-local-generator[192119]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:37 compute-0 systemd-sysv-generator[192123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:37 compute-0 sudo[192096]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:38 compute-0 ceph-mon[75334]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:38 compute-0 sudo[192285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdzxqyfpyvxoibmlvedibbmckqroxbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045578.0608985-360-140816270353089/AnsiballZ_systemd.py'
Feb 02 15:19:38 compute-0 sudo[192285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:38 compute-0 python3.9[192287]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:38 compute-0 sudo[192285]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:39 compute-0 sudo[192440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlenbstpfkvqqwrnbtgkxgasfzxjftay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045578.8410525-360-32308422920622/AnsiballZ_systemd.py'
Feb 02 15:19:39 compute-0 sudo[192440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:39 compute-0 python3.9[192442]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:39 compute-0 systemd[1]: Reloading.
Feb 02 15:19:39 compute-0 systemd-sysv-generator[192473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:39 compute-0 systemd-rc-local-generator[192469]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:39 compute-0 sudo[192440]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:40 compute-0 ceph-mon[75334]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:40 compute-0 sudo[192630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbelhldbxvywiunjyfnyckxquxegkdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045580.0026202-396-134234532314521/AnsiballZ_systemd.py'
Feb 02 15:19:40 compute-0 sudo[192630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:40 compute-0 python3.9[192632]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 02 15:19:40 compute-0 systemd[1]: Reloading.
Feb 02 15:19:40 compute-0 systemd-rc-local-generator[192661]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:19:40 compute-0 systemd-sysv-generator[192666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:19:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:41 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 02 15:19:41 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 02 15:19:41 compute-0 sudo[192630]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:41 compute-0 sudo[192824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwulxvjkjobktxfdynwedwzxyctsxzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045581.2769828-404-116781958735239/AnsiballZ_systemd.py'
Feb 02 15:19:41 compute-0 sudo[192824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:41 compute-0 python3.9[192826]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:41 compute-0 sudo[192824]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:42 compute-0 ceph-mon[75334]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:42 compute-0 sudo[192979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckonodedosohznrwldlhdigjkwuiqceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045582.0420377-404-259081438010104/AnsiballZ_systemd.py'
Feb 02 15:19:42 compute-0 sudo[192979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:42 compute-0 python3.9[192981]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:42 compute-0 sudo[192979]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:19:42
Feb 02 15:19:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:19:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:19:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'volumes', 'images', 'default.rgw.control']
Feb 02 15:19:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:19:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:43 compute-0 sudo[193134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkfbueaccprbuesmqsksctifthhjucq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045582.8265908-404-147257202037422/AnsiballZ_systemd.py'
Feb 02 15:19:43 compute-0 sudo[193134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:43 compute-0 python3.9[193136]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:43 compute-0 sudo[193134]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:43 compute-0 sudo[193289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjuayupvsoksistjkuydxpjracykslvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045583.6515472-404-169421206959561/AnsiballZ_systemd.py'
Feb 02 15:19:43 compute-0 sudo[193289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:44 compute-0 python3.9[193291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:44 compute-0 sudo[193289]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:44 compute-0 ceph-mon[75334]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:19:44 compute-0 sudo[193444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxdksnoinqfrvpiarfswhmvrnvzvygil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045584.4404783-404-263322652989376/AnsiballZ_systemd.py'
Feb 02 15:19:44 compute-0 sudo[193444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:19:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:19:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:45 compute-0 python3.9[193446]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:45 compute-0 sudo[193444]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:45 compute-0 sudo[193599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkomzhzwemigbxcbthzzrcbritxfsfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045585.2661352-404-29879803902578/AnsiballZ_systemd.py'
Feb 02 15:19:45 compute-0 sudo[193599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:45 compute-0 python3.9[193601]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:45 compute-0 sudo[193599]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:46 compute-0 ceph-mon[75334]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:46 compute-0 sudo[193754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqmdxskxtaxylcgzfxjsijdfblatlmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045586.1006994-404-89747385816585/AnsiballZ_systemd.py'
Feb 02 15:19:46 compute-0 sudo[193754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:46 compute-0 python3.9[193756]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:46 compute-0 sudo[193754]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:47 compute-0 sudo[193909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwjtwqfgqbsjmdumufwjoymrjnibbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045586.802831-404-131601018366389/AnsiballZ_systemd.py'
Feb 02 15:19:47 compute-0 sudo[193909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:47 compute-0 python3.9[193911]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:47 compute-0 sudo[193909]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:47 compute-0 sudo[194064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbjjbhnrefnijfcyeaeigngxdkgjmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045587.6300213-404-4382737065948/AnsiballZ_systemd.py'
Feb 02 15:19:47 compute-0 sudo[194064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:48 compute-0 python3.9[194066]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:48 compute-0 sudo[194064]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:48 compute-0 ceph-mon[75334]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:48 compute-0 sudo[194219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycpphwkuhjdqdmpnvminhkguamgnhwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045588.3810725-404-1809855352787/AnsiballZ_systemd.py'
Feb 02 15:19:48 compute-0 sudo[194219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:48 compute-0 python3.9[194221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:49 compute-0 sudo[194219]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:49 compute-0 sudo[194374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrtausvlnmswrlrxhrqcctlktkiybzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045589.1367857-404-172989770979877/AnsiballZ_systemd.py'
Feb 02 15:19:49 compute-0 sudo[194374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:49 compute-0 python3.9[194376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:49 compute-0 sudo[194374]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:50 compute-0 sudo[194529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplthzyqdzznzqwtfecyhaaymkjkckit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045589.969172-404-19368146135271/AnsiballZ_systemd.py'
Feb 02 15:19:50 compute-0 sudo[194529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:50 compute-0 ceph-mon[75334]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:50 compute-0 python3.9[194531]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:50 compute-0 sudo[194529]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:51 compute-0 sudo[194684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzacazxenoaghqqzpfolgnzrrlntsbiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045590.7712364-404-188572339319673/AnsiballZ_systemd.py'
Feb 02 15:19:51 compute-0 sudo[194684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:51 compute-0 python3.9[194686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:51 compute-0 sudo[194684]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:51 compute-0 sudo[194839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdpcloiuplgzfqunoneplbjfjpsugfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045591.5108132-404-140810891170918/AnsiballZ_systemd.py'
Feb 02 15:19:51 compute-0 sudo[194839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:52 compute-0 python3.9[194841]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 02 15:19:52 compute-0 sudo[194839]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:52 compute-0 ceph-mon[75334]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:52 compute-0 sudo[194994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glpvqmirlphdxopvnfqdzixwmmmtfibk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045592.4684248-506-95922943346943/AnsiballZ_file.py'
Feb 02 15:19:52 compute-0 sudo[194994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:52 compute-0 python3.9[194996]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:52 compute-0 sudo[194994]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:53 compute-0 sudo[195146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrcwxdfphzojzdtckjsyvciwbohrixub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045593.166373-506-159811814234575/AnsiballZ_file.py'
Feb 02 15:19:53 compute-0 sudo[195146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:53 compute-0 python3.9[195148]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:53 compute-0 sudo[195146]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:54 compute-0 sudo[195298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsiuipbhgnboqqnwqneqyzulhutisalq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045593.825206-506-55876441537279/AnsiballZ_file.py'
Feb 02 15:19:54 compute-0 sudo[195298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:19:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:19:54 compute-0 python3.9[195300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:54 compute-0 sudo[195298]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:54 compute-0 ceph-mon[75334]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:54 compute-0 sudo[195450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcxxjhsyzlobyvfgswvwhwfbbgqxbrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045594.382869-506-107728836069577/AnsiballZ_file.py'
Feb 02 15:19:54 compute-0 sudo[195450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:54 compute-0 python3.9[195452]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:54 compute-0 sudo[195450]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:55 compute-0 sudo[195602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unpktaychmlrrnoqhnxbresuyrtuwaue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045595.0645945-506-272520012011289/AnsiballZ_file.py'
Feb 02 15:19:55 compute-0 sudo[195602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:55 compute-0 python3.9[195604]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:55 compute-0 sudo[195602]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:56 compute-0 sudo[195754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikropfdelajhdcboycdocgqpjfsdaaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045595.7880778-506-185341618012153/AnsiballZ_file.py'
Feb 02 15:19:56 compute-0 sudo[195754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:56 compute-0 python3.9[195756]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:19:56 compute-0 sudo[195754]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:56 compute-0 ceph-mon[75334]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:19:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:57 compute-0 python3.9[195906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:19:57 compute-0 sudo[196056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmzmmgrviminvbsnhekmhwntxzaaxlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045597.3908567-557-120089454143016/AnsiballZ_stat.py'
Feb 02 15:19:57 compute-0 sudo[196056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:58 compute-0 python3.9[196058]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:19:58 compute-0 sudo[196056]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:58 compute-0 ceph-mon[75334]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:58 compute-0 sudo[196181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmvpfxaipntkbwiijyjmiikvjsltonpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045597.3908567-557-120089454143016/AnsiballZ_copy.py'
Feb 02 15:19:58 compute-0 sudo[196181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:58 compute-0 python3.9[196183]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045597.3908567-557-120089454143016/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:19:58 compute-0 sudo[196181]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:19:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:19:59.229 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:19:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:19:59.231 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:19:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:19:59.231 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:19:59 compute-0 sudo[196333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxzedvjxwmnubiycffpugbxgjywbohj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045599.0147104-557-213543344509753/AnsiballZ_stat.py'
Feb 02 15:19:59 compute-0 sudo[196333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:59 compute-0 python3.9[196335]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:19:59 compute-0 sudo[196333]: pam_unix(sudo:session): session closed for user root
Feb 02 15:19:59 compute-0 sudo[196475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxndbkseedpiglllunyevkipjzoadcsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045599.0147104-557-213543344509753/AnsiballZ_copy.py'
Feb 02 15:19:59 compute-0 sudo[196475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:19:59 compute-0 podman[196432]: 2026-02-02 15:19:59.945663717 +0000 UTC m=+0.091322710 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:20:00 compute-0 python3.9[196483]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045599.0147104-557-213543344509753/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:00 compute-0 sudo[196475]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:00 compute-0 ceph-mon[75334]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:00 compute-0 sudo[196636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhzehagbtaoiigeusmzsncfkpvmgspvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045600.2664766-557-142048813990750/AnsiballZ_stat.py'
Feb 02 15:20:00 compute-0 sudo[196636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:00 compute-0 python3.9[196638]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:00 compute-0 sudo[196636]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:01 compute-0 sudo[196774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfteobgxqkbypuaeygjqkncajlfuvtjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045600.2664766-557-142048813990750/AnsiballZ_copy.py'
Feb 02 15:20:01 compute-0 sudo[196774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:01 compute-0 podman[196735]: 2026-02-02 15:20:01.196521099 +0000 UTC m=+0.063620854 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 15:20:01 compute-0 python3.9[196782]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045600.2664766-557-142048813990750/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:01 compute-0 sudo[196774]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:01 compute-0 sudo[196932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzuevpacxolipqfybrxnjxshrchesdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045601.54416-557-107313875678839/AnsiballZ_stat.py'
Feb 02 15:20:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:01 compute-0 sudo[196932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:02 compute-0 python3.9[196934]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:02 compute-0 sudo[196932]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:02 compute-0 sudo[197057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshlzssjzqvchbkqpjwktqqopmqrxylz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045601.54416-557-107313875678839/AnsiballZ_copy.py'
Feb 02 15:20:02 compute-0 sudo[197057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:02 compute-0 ceph-mon[75334]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:02 compute-0 python3.9[197059]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045601.54416-557-107313875678839/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:02 compute-0 sudo[197057]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:03 compute-0 sudo[197209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkyfsobvoiukejgmwnbqpphpgibrzpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045602.7234437-557-113695491329788/AnsiballZ_stat.py'
Feb 02 15:20:03 compute-0 sudo[197209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:03 compute-0 python3.9[197211]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:03 compute-0 sudo[197209]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:03 compute-0 sudo[197334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hejetmzbdpxrygcigbbfffusnpphldwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045602.7234437-557-113695491329788/AnsiballZ_copy.py'
Feb 02 15:20:03 compute-0 sudo[197334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:03 compute-0 python3.9[197336]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045602.7234437-557-113695491329788/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:03 compute-0 sudo[197334]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:04 compute-0 sudo[197486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ureysdirdnglgwgrcgyehlqewyohnahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045604.0978281-557-250906521157573/AnsiballZ_stat.py'
Feb 02 15:20:04 compute-0 sudo[197486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:04 compute-0 ceph-mon[75334]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:04 compute-0 python3.9[197488]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:04 compute-0 sudo[197486]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:04 compute-0 sudo[197611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzzimnbagjsufwyggcrgjhnzqvlpxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045604.0978281-557-250906521157573/AnsiballZ_copy.py'
Feb 02 15:20:05 compute-0 sudo[197611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:05 compute-0 python3.9[197613]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045604.0978281-557-250906521157573/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:05 compute-0 sudo[197611]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:05 compute-0 sudo[197763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwufkulrdtwvevfhhgvnyrnzvvlnepwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045605.376903-557-40168501438214/AnsiballZ_stat.py'
Feb 02 15:20:05 compute-0 sudo[197763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:05 compute-0 python3.9[197765]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:05 compute-0 sudo[197763]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:06 compute-0 sudo[197886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxceupxpbqskoypeahsgjolzsefvhmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045605.376903-557-40168501438214/AnsiballZ_copy.py'
Feb 02 15:20:06 compute-0 sudo[197886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:06 compute-0 python3.9[197888]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045605.376903-557-40168501438214/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:06 compute-0 ceph-mon[75334]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:06 compute-0 sudo[197886]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:06 compute-0 sudo[198038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-betuqjefxgdlmltfencexhgonqflmxjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045606.5561771-557-62596510739792/AnsiballZ_stat.py'
Feb 02 15:20:06 compute-0 sudo[198038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:06 compute-0 python3.9[198040]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:06 compute-0 sudo[198038]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:07 compute-0 sudo[198163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbvntughuxaobscfkxdfvabxmnpsffn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045606.5561771-557-62596510739792/AnsiballZ_copy.py'
Feb 02 15:20:07 compute-0 sudo[198163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:07 compute-0 python3.9[198165]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770045606.5561771-557-62596510739792/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:07 compute-0 sudo[198163]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:07 compute-0 sudo[198315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlqinjbhiupmzbrvxwszgacecjgnfef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045607.6848328-670-235040855573162/AnsiballZ_command.py'
Feb 02 15:20:07 compute-0 sudo[198315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:08 compute-0 python3.9[198317]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 02 15:20:08 compute-0 sudo[198315]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:08 compute-0 ceph-mon[75334]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:08 compute-0 sudo[198468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfanchpqnarcrvgkxowaqnmzjhthowkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045608.4373107-679-56371499426230/AnsiballZ_file.py'
Feb 02 15:20:08 compute-0 sudo[198468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:08 compute-0 python3.9[198470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:09 compute-0 sudo[198468]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:09 compute-0 sudo[198620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioygmghlqwxumbioejoqwnyueswxiyiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045609.1426954-679-103773935851797/AnsiballZ_file.py'
Feb 02 15:20:09 compute-0 sudo[198620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:09 compute-0 python3.9[198622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:09 compute-0 sudo[198620]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:10 compute-0 sudo[198772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjynhrpkijqdkdiwwlropdnojbuwxygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045609.8378875-679-175019383479040/AnsiballZ_file.py'
Feb 02 15:20:10 compute-0 sudo[198772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:10 compute-0 python3.9[198774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:10 compute-0 sudo[198772]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:10 compute-0 ceph-mon[75334]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:10 compute-0 sudo[198924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqsegmcxiayymrpibncqkmwjtnblvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045610.4663713-679-241623809466256/AnsiballZ_file.py'
Feb 02 15:20:10 compute-0 sudo[198924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:10 compute-0 python3.9[198926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:10 compute-0 sudo[198924]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:11 compute-0 sudo[199076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brnxwxvldewlrzkedmxnpehxpvlnjkhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045611.0162766-679-161100500119665/AnsiballZ_file.py'
Feb 02 15:20:11 compute-0 sudo[199076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:11 compute-0 python3.9[199078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:11 compute-0 sudo[199076]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:11 compute-0 sudo[199228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eabwbcfwvmufganocheyummpeiflvhgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045611.6122663-679-88872209123136/AnsiballZ_file.py'
Feb 02 15:20:11 compute-0 sudo[199228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:12 compute-0 python3.9[199230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:12 compute-0 sudo[199228]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:12 compute-0 ceph-mon[75334]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:12 compute-0 sudo[199380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfzszdkjqbdmztvtdpmfhwbaliunsoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045612.2565904-679-165721357718612/AnsiballZ_file.py'
Feb 02 15:20:12 compute-0 sudo[199380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:12 compute-0 python3.9[199382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:12 compute-0 sudo[199380]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:13 compute-0 sudo[199532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szisnkaagodfoaugydlkjzrzbyugxyyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045612.8665848-679-81278401040804/AnsiballZ_file.py'
Feb 02 15:20:13 compute-0 sudo[199532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:13 compute-0 python3.9[199534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:13 compute-0 sudo[199532]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:13 compute-0 sudo[199684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phzkcsjwrwyizdtyesjshexnusvazvaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045613.5455365-679-81669437104825/AnsiballZ_file.py'
Feb 02 15:20:13 compute-0 sudo[199684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:14 compute-0 python3.9[199686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:14 compute-0 sudo[199684]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:14 compute-0 ceph-mon[75334]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:14 compute-0 sudo[199836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehzmvyikesemzvcknsssfvifyiqclgar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045614.352744-679-64252402755823/AnsiballZ_file.py'
Feb 02 15:20:14 compute-0 sudo[199836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:14 compute-0 python3.9[199838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:14 compute-0 sudo[199836]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:15 compute-0 sudo[199988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnraxpftoutbmnoxayfuzhqsrammvau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045615.0094182-679-88940255224563/AnsiballZ_file.py'
Feb 02 15:20:15 compute-0 sudo[199988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:15 compute-0 python3.9[199990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:15 compute-0 sudo[199988]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:15 compute-0 sudo[200140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haferiuskueqarupxxaxwfjifrdmtjas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045615.6806629-679-135230527178490/AnsiballZ_file.py'
Feb 02 15:20:15 compute-0 sudo[200140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:16 compute-0 python3.9[200142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:16 compute-0 sudo[200140]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:16 compute-0 ceph-mon[75334]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:16 compute-0 sudo[200292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoemldzlwbddlkoianragrislfpohutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045616.3650765-679-206553474333652/AnsiballZ_file.py'
Feb 02 15:20:16 compute-0 sudo[200292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:16 compute-0 python3.9[200294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:16 compute-0 sudo[200292]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:17 compute-0 sudo[200444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladnkczvdfbjfzoxkpaucydnjyvoshjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045617.02542-679-222979246100128/AnsiballZ_file.py'
Feb 02 15:20:17 compute-0 sudo[200444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:17 compute-0 python3.9[200446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:17 compute-0 sudo[200444]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:18 compute-0 sudo[200596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hucvswwamhcdslhqmlbpwuieyufqtzbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045617.7450147-778-228420847020237/AnsiballZ_stat.py'
Feb 02 15:20:18 compute-0 sudo[200596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:18 compute-0 python3.9[200598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:18 compute-0 sudo[200596]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:18 compute-0 ceph-mon[75334]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:18 compute-0 sudo[200719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfeluasfxtfvbodkuifclaqljjftjyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045617.7450147-778-228420847020237/AnsiballZ_copy.py'
Feb 02 15:20:18 compute-0 sudo[200719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:18 compute-0 python3.9[200721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045617.7450147-778-228420847020237/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:18 compute-0 sudo[200719]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:19 compute-0 sudo[200794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:20:19 compute-0 sudo[200794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:19 compute-0 sudo[200794]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:19 compute-0 sudo[200823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:20:19 compute-0 sudo[200823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:19 compute-0 sudo[200921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugspingkjqmcfsslqseymapfcckmyau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045618.9332964-778-209003017425647/AnsiballZ_stat.py'
Feb 02 15:20:19 compute-0 sudo[200921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:19 compute-0 python3.9[200923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:19 compute-0 sudo[200921]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:19 compute-0 sudo[200823]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:20:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:20:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:20:19 compute-0 sudo[201004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:20:19 compute-0 sudo[201004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:19 compute-0 sudo[201004]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:19 compute-0 sudo[201052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:20:19 compute-0 sudo[201052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:19 compute-0 sudo[201127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvicrdruspeekmvylxdalysuiagzusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045618.9332964-778-209003017425647/AnsiballZ_copy.py'
Feb 02 15:20:19 compute-0 sudo[201127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:20 compute-0 podman[201143]: 2026-02-02 15:20:20.039641798 +0000 UTC m=+0.050718977 container create 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:20:20 compute-0 python3.9[201129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045618.9332964-778-209003017425647/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:20 compute-0 systemd[1]: Started libpod-conmon-014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1.scope.
Feb 02 15:20:20 compute-0 sudo[201127]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:20 compute-0 podman[201143]: 2026-02-02 15:20:20.012906354 +0000 UTC m=+0.023983593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:20 compute-0 podman[201143]: 2026-02-02 15:20:20.128113511 +0000 UTC m=+0.139190740 container init 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:20:20 compute-0 podman[201143]: 2026-02-02 15:20:20.134455655 +0000 UTC m=+0.145532844 container start 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:20:20 compute-0 podman[201143]: 2026-02-02 15:20:20.139266788 +0000 UTC m=+0.150344027 container attach 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:20:20 compute-0 trusting_cerf[201160]: 167 167
Feb 02 15:20:20 compute-0 systemd[1]: libpod-014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1.scope: Deactivated successfully.
Feb 02 15:20:20 compute-0 podman[201182]: 2026-02-02 15:20:20.180836574 +0000 UTC m=+0.028489235 container died 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-51935c468203225b96082a818897bfbbd2503f7e67143c24d5a714f2fd6bd4da-merged.mount: Deactivated successfully.
Feb 02 15:20:20 compute-0 podman[201182]: 2026-02-02 15:20:20.219918535 +0000 UTC m=+0.067571126 container remove 014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:20:20 compute-0 systemd[1]: libpod-conmon-014e57098114bac36820e32d54605ad8c685d95df23d0fe38c2e76b293662fa1.scope: Deactivated successfully.
Feb 02 15:20:20 compute-0 podman[201264]: 2026-02-02 15:20:20.371840475 +0000 UTC m=+0.052351522 container create ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:20:20 compute-0 systemd[1]: Started libpod-conmon-ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed.scope.
Feb 02 15:20:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:20 compute-0 podman[201264]: 2026-02-02 15:20:20.352693917 +0000 UTC m=+0.033205004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:20 compute-0 podman[201264]: 2026-02-02 15:20:20.46984216 +0000 UTC m=+0.150353237 container init ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:20:20 compute-0 podman[201264]: 2026-02-02 15:20:20.475725379 +0000 UTC m=+0.156236426 container start ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:20:20 compute-0 podman[201264]: 2026-02-02 15:20:20.479150675 +0000 UTC m=+0.159661722 container attach ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:20:20 compute-0 ceph-mon[75334]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:20:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:20:20 compute-0 sudo[201358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfgzbpspsdmfiqzvmcdsyxsjxofkwxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045620.2405372-778-237525154692916/AnsiballZ_stat.py'
Feb 02 15:20:20 compute-0 sudo[201358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:20 compute-0 python3.9[201360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:20 compute-0 sudo[201358]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:20 compute-0 elastic_ritchie[201321]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:20:20 compute-0 elastic_ritchie[201321]: --> All data devices are unavailable
Feb 02 15:20:21 compute-0 systemd[1]: libpod-ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed.scope: Deactivated successfully.
Feb 02 15:20:21 compute-0 podman[201264]: 2026-02-02 15:20:21.010919442 +0000 UTC m=+0.691430519 container died ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3ceeffb98b206095dda034829acff3dd0a12ba0df3fa871ddcfe76b3fb1d414-merged.mount: Deactivated successfully.
Feb 02 15:20:21 compute-0 podman[201264]: 2026-02-02 15:20:21.060019193 +0000 UTC m=+0.740530240 container remove ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:20:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:21 compute-0 systemd[1]: libpod-conmon-ebdea4952fc8067568db8886c74c6d0b5f4c05f401f04827269774510fa819ed.scope: Deactivated successfully.
Feb 02 15:20:21 compute-0 sudo[201052]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:21 compute-0 sudo[201529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejcntkpbrnckyofkjyhubirdjsfpmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045620.2405372-778-237525154692916/AnsiballZ_copy.py'
Feb 02 15:20:21 compute-0 sudo[201481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:20:21 compute-0 sudo[201529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:21 compute-0 sudo[201481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:21 compute-0 sudo[201481]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:21 compute-0 sudo[201534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:20:21 compute-0 sudo[201534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:21 compute-0 python3.9[201532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045620.2405372-778-237525154692916/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:21 compute-0 sudo[201529]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.489511991 +0000 UTC m=+0.038283286 container create d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:20:21 compute-0 systemd[1]: Started libpod-conmon-d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e.scope.
Feb 02 15:20:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.565413178 +0000 UTC m=+0.114184583 container init d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.475428765 +0000 UTC m=+0.024200120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.575521891 +0000 UTC m=+0.124293196 container start d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:20:21 compute-0 admiring_panini[201662]: 167 167
Feb 02 15:20:21 compute-0 systemd[1]: libpod-d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e.scope: Deactivated successfully.
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.582461876 +0000 UTC m=+0.131233191 container attach d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.583274613 +0000 UTC m=+0.132045918 container died d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc956bf86af09a1f34461eb8c0c52a3cf851fb42e619a728553dae4b392ec2d1-merged.mount: Deactivated successfully.
Feb 02 15:20:21 compute-0 podman[201598]: 2026-02-02 15:20:21.630215451 +0000 UTC m=+0.178986756 container remove d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:20:21 compute-0 systemd[1]: libpod-conmon-d1b4a2bbc5c0d8b63f8ed158f1df03753c9cb679c1171260ccf8c7a68fe14c3e.scope: Deactivated successfully.
Feb 02 15:20:21 compute-0 sudo[201754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jffqvedfrkkclzvocrlssrkeskpfmxje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045621.4759705-778-94130978544667/AnsiballZ_stat.py'
Feb 02 15:20:21 compute-0 sudo[201754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:21 compute-0 podman[201762]: 2026-02-02 15:20:21.790062138 +0000 UTC m=+0.033788914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:21 compute-0 python3.9[201757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:21 compute-0 sudo[201754]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.160798019 +0000 UTC m=+0.404524725 container create 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:20:22 compute-0 sudo[201896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqvokpxnroiccmaobycvhldcaqycwomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045621.4759705-778-94130978544667/AnsiballZ_copy.py'
Feb 02 15:20:22 compute-0 sudo[201896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:22 compute-0 systemd[1]: Started libpod-conmon-27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043.scope.
Feb 02 15:20:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af466b9ff71d44091f9332a76d7605056c3887566786a892a35240e030f42bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af466b9ff71d44091f9332a76d7605056c3887566786a892a35240e030f42bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af466b9ff71d44091f9332a76d7605056c3887566786a892a35240e030f42bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af466b9ff71d44091f9332a76d7605056c3887566786a892a35240e030f42bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:22 compute-0 python3.9[201898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045621.4759705-778-94130978544667/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:22 compute-0 sudo[201896]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.607154566 +0000 UTC m=+0.850881332 container init 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:20:22 compute-0 ceph-mon[75334]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.613842683 +0000 UTC m=+0.857569399 container start 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.617966392 +0000 UTC m=+0.861693158 container attach 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:20:22 compute-0 youthful_tesla[201901]: {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     "0": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "devices": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "/dev/loop3"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             ],
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_name": "ceph_lv0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_size": "21470642176",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "name": "ceph_lv0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "tags": {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_name": "ceph",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.crush_device_class": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.encrypted": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.objectstore": "bluestore",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_id": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.vdo": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.with_tpm": "0"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             },
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "vg_name": "ceph_vg0"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         }
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     ],
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     "1": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "devices": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "/dev/loop4"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             ],
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_name": "ceph_lv1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_size": "21470642176",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "name": "ceph_lv1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "tags": {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_name": "ceph",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.crush_device_class": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.encrypted": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.objectstore": "bluestore",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_id": "1",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.vdo": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.with_tpm": "0"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             },
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "vg_name": "ceph_vg1"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         }
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     ],
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     "2": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "devices": [
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "/dev/loop5"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             ],
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_name": "ceph_lv2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_size": "21470642176",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "name": "ceph_lv2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "tags": {
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.cluster_name": "ceph",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.crush_device_class": "",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.encrypted": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.objectstore": "bluestore",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osd_id": "2",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.vdo": "0",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:                 "ceph.with_tpm": "0"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             },
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "type": "block",
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:             "vg_name": "ceph_vg2"
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:         }
Feb 02 15:20:22 compute-0 youthful_tesla[201901]:     ]
Feb 02 15:20:22 compute-0 youthful_tesla[201901]: }
Feb 02 15:20:22 compute-0 sudo[202059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yefxeiwxcjoljoxkqrnzoanbdpjyulea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045622.6626947-778-103100314065055/AnsiballZ_stat.py'
Feb 02 15:20:22 compute-0 sudo[202059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:22 compute-0 systemd[1]: libpod-27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043.scope: Deactivated successfully.
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.934842351 +0000 UTC m=+1.178569037 container died 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af466b9ff71d44091f9332a76d7605056c3887566786a892a35240e030f42bf-merged.mount: Deactivated successfully.
Feb 02 15:20:22 compute-0 podman[201762]: 2026-02-02 15:20:22.976921705 +0000 UTC m=+1.220648391 container remove 27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:20:22 compute-0 systemd[1]: libpod-conmon-27eeb397e0de66b26f8a7046785f56e6e2e146072d24388da9d6dbdcc81c5043.scope: Deactivated successfully.
Feb 02 15:20:23 compute-0 sudo[201534]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:23 compute-0 sudo[202074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:20:23 compute-0 sudo[202074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:23 compute-0 sudo[202074]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:23 compute-0 sudo[202099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:20:23 compute-0 sudo[202099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:23 compute-0 python3.9[202061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:23 compute-0 sudo[202059]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.404804518 +0000 UTC m=+0.054571847 container create 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:20:23 compute-0 systemd[1]: Started libpod-conmon-05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e.scope.
Feb 02 15:20:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:23 compute-0 sudo[202275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmycfxnivppmrksjyttpxcdqalacznw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045622.6626947-778-103100314065055/AnsiballZ_copy.py'
Feb 02 15:20:23 compute-0 sudo[202275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.382414731 +0000 UTC m=+0.032182120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.481646227 +0000 UTC m=+0.131413586 container init 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.487016459 +0000 UTC m=+0.136783798 container start 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.490818208 +0000 UTC m=+0.140585517 container attach 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:20:23 compute-0 vibrant_hermann[202273]: 167 167
Feb 02 15:20:23 compute-0 systemd[1]: libpod-05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e.scope: Deactivated successfully.
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.492632299 +0000 UTC m=+0.142399608 container died 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7851ddf7ec61856e4b2bb51a037fc497ca8f96bf6e1b432d2b599e60f75eddd8-merged.mount: Deactivated successfully.
Feb 02 15:20:23 compute-0 podman[202206]: 2026-02-02 15:20:23.525792241 +0000 UTC m=+0.175559550 container remove 05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:20:23 compute-0 systemd[1]: libpod-conmon-05ecacb82ac5e0db0bd17e3ce2c4caa72ac6b49ebb858499b20f72fdf173894e.scope: Deactivated successfully.
Feb 02 15:20:23 compute-0 python3.9[202278]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045622.6626947-778-103100314065055/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:23 compute-0 sudo[202275]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:23 compute-0 podman[202298]: 2026-02-02 15:20:23.714471083 +0000 UTC m=+0.057513647 container create c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:20:23 compute-0 systemd[1]: Started libpod-conmon-c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3.scope.
Feb 02 15:20:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1eb8b064314e3dd14fcb7c6fc11a167bbff2718d548f83fd57c9a9bf37f62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1eb8b064314e3dd14fcb7c6fc11a167bbff2718d548f83fd57c9a9bf37f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1eb8b064314e3dd14fcb7c6fc11a167bbff2718d548f83fd57c9a9bf37f62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e1eb8b064314e3dd14fcb7c6fc11a167bbff2718d548f83fd57c9a9bf37f62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:20:23 compute-0 podman[202298]: 2026-02-02 15:20:23.692680106 +0000 UTC m=+0.035722720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:20:23 compute-0 podman[202298]: 2026-02-02 15:20:23.80720814 +0000 UTC m=+0.150250774 container init c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:20:23 compute-0 podman[202298]: 2026-02-02 15:20:23.816766383 +0000 UTC m=+0.159808947 container start c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:20:23 compute-0 podman[202298]: 2026-02-02 15:20:23.820465668 +0000 UTC m=+0.163508242 container attach c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:20:24 compute-0 sudo[202479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommjeejojgzxwcqxgkbjyaenbwvghban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045623.8436959-778-183374289906215/AnsiballZ_stat.py'
Feb 02 15:20:24 compute-0 sudo[202479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:24 compute-0 python3.9[202481]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:24 compute-0 sudo[202479]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:24 compute-0 lvm[202572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:20:24 compute-0 lvm[202572]: VG ceph_vg0 finished
Feb 02 15:20:24 compute-0 lvm[202590]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:20:24 compute-0 lvm[202590]: VG ceph_vg1 finished
Feb 02 15:20:24 compute-0 lvm[202595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:20:24 compute-0 lvm[202595]: VG ceph_vg2 finished
Feb 02 15:20:24 compute-0 awesome_almeida[202339]: {}
Feb 02 15:20:24 compute-0 systemd[1]: libpod-c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3.scope: Deactivated successfully.
Feb 02 15:20:24 compute-0 podman[202298]: 2026-02-02 15:20:24.613606897 +0000 UTC m=+0.956649431 container died c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:20:24 compute-0 systemd[1]: libpod-c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3.scope: Consumed 1.113s CPU time.
Feb 02 15:20:24 compute-0 ceph-mon[75334]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e1eb8b064314e3dd14fcb7c6fc11a167bbff2718d548f83fd57c9a9bf37f62-merged.mount: Deactivated successfully.
Feb 02 15:20:24 compute-0 sudo[202678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuujlxksjxjqartuyczlbkacrfvujhmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045623.8436959-778-183374289906215/AnsiballZ_copy.py'
Feb 02 15:20:24 compute-0 podman[202298]: 2026-02-02 15:20:24.653839178 +0000 UTC m=+0.996881712 container remove c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:20:24 compute-0 sudo[202678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:24 compute-0 systemd[1]: libpod-conmon-c1e70795059788931c611eee6afa7091a7d8ce444c632bde4d8286d8ba2935a3.scope: Deactivated successfully.
Feb 02 15:20:24 compute-0 sudo[202099]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:20:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:20:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:24 compute-0 sudo[202686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:20:24 compute-0 sudo[202686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:20:24 compute-0 sudo[202686]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:24 compute-0 python3.9[202685]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045623.8436959-778-183374289906215/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:24 compute-0 sudo[202678]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:25 compute-0 sudo[202860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paoknxfylgtpfaaeiqhisowecbfpzvqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045624.9905617-778-229846175557999/AnsiballZ_stat.py'
Feb 02 15:20:25 compute-0 sudo[202860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:25 compute-0 python3.9[202862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:25 compute-0 sudo[202860]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:20:25 compute-0 sudo[202983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrccrzijwkjlxburtbywcgwegqcsoyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045624.9905617-778-229846175557999/AnsiballZ_copy.py'
Feb 02 15:20:25 compute-0 sudo[202983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:26 compute-0 python3.9[202985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045624.9905617-778-229846175557999/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:26 compute-0 sudo[202983]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:26 compute-0 sudo[203135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhukytqsamkzhjjvxtxihtygrgwrejct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045626.2318187-778-165692792560692/AnsiballZ_stat.py'
Feb 02 15:20:26 compute-0 sudo[203135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:26 compute-0 python3.9[203137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:26 compute-0 sudo[203135]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:26 compute-0 ceph-mon[75334]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:26 compute-0 sudo[203258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlffxosuqicbsuugjypjzpyjprwsplhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045626.2318187-778-165692792560692/AnsiballZ_copy.py'
Feb 02 15:20:26 compute-0 sudo[203258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:27 compute-0 python3.9[203260]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045626.2318187-778-165692792560692/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:27 compute-0 sudo[203258]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:27 compute-0 sudo[203410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-troqclcgtstjnbzalxodjuwjcctdqmro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045627.3754258-778-213481043966698/AnsiballZ_stat.py'
Feb 02 15:20:27 compute-0 sudo[203410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:27 compute-0 python3.9[203412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:27 compute-0 sudo[203410]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:28 compute-0 sudo[203533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgirqwepbusmukvcdxlgpmhfksshaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045627.3754258-778-213481043966698/AnsiballZ_copy.py'
Feb 02 15:20:28 compute-0 sudo[203533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:31 compute-0 podman[203547]: 2026-02-02 15:20:31.391889899 +0000 UTC m=+0.132699790 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:20:31 compute-0 ceph-mon[75334]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:31 compute-0 ceph-mon[75334]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:31 compute-0 python3.9[203535]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045627.3754258-778-213481043966698/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:31 compute-0 podman[203536]: 2026-02-02 15:20:31.471225852 +0000 UTC m=+1.207873377 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:20:31 compute-0 sudo[203533]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:31 compute-0 sudo[203731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywawgjobirwsxilaouggtozzwkeltpci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045631.6119018-778-228381276668910/AnsiballZ_stat.py'
Feb 02 15:20:31 compute-0 sudo[203731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:32 compute-0 python3.9[203733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:32 compute-0 sudo[203731]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:32 compute-0 ceph-mon[75334]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:32 compute-0 sudo[203854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymccvyernyggctvovimpwuwyhvsusxgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045631.6119018-778-228381276668910/AnsiballZ_copy.py'
Feb 02 15:20:32 compute-0 sudo[203854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:32 compute-0 python3.9[203856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045631.6119018-778-228381276668910/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:32 compute-0 sudo[203854]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:32 compute-0 sudo[204006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mannvwihhlwqtkznegpkuzmjuiamgftz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045632.7493534-778-162182065176214/AnsiballZ_stat.py'
Feb 02 15:20:32 compute-0 sudo[204006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:33 compute-0 python3.9[204008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:33 compute-0 sudo[204006]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:33 compute-0 sudo[204129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpzwxdbusgljbyhextlmcnybtkveivnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045632.7493534-778-162182065176214/AnsiballZ_copy.py'
Feb 02 15:20:33 compute-0 sudo[204129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:33 compute-0 python3.9[204131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045632.7493534-778-162182065176214/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:33 compute-0 sudo[204129]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:33 compute-0 sudo[204281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhbtsnolysnwrcklspqkcxprxyxbnkag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045633.7570863-778-142837669887201/AnsiballZ_stat.py'
Feb 02 15:20:33 compute-0 sudo[204281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:34 compute-0 python3.9[204283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:34 compute-0 sudo[204281]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:34 compute-0 ceph-mon[75334]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:34 compute-0 sudo[204404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdgvqgjgzhqarxvvvanwicrlgxhsyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045633.7570863-778-142837669887201/AnsiballZ_copy.py'
Feb 02 15:20:34 compute-0 sudo[204404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:34 compute-0 python3.9[204406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045633.7570863-778-142837669887201/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:34 compute-0 sudo[204404]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:35 compute-0 sudo[204556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npukhxwwniurypgpjfpmwwmimpejsgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045634.8444715-778-167554039715317/AnsiballZ_stat.py'
Feb 02 15:20:35 compute-0 sudo[204556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:35 compute-0 python3.9[204558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:35 compute-0 sudo[204556]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:35 compute-0 sudo[204679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrboeenltixdsylrlyuulgnpfyuaurss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045634.8444715-778-167554039715317/AnsiballZ_copy.py'
Feb 02 15:20:35 compute-0 sudo[204679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:35 compute-0 python3.9[204681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045634.8444715-778-167554039715317/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:35 compute-0 sudo[204679]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:36 compute-0 sudo[204831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtdixdhlsishthefgsnnbfdwlnqusgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045636.0602026-778-114402575463546/AnsiballZ_stat.py'
Feb 02 15:20:36 compute-0 sudo[204831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:36 compute-0 ceph-mon[75334]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:36 compute-0 python3.9[204833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:36 compute-0 sudo[204831]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:36 compute-0 sudo[204954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcswjvwhiauraiyejvftbjcswkvolpeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045636.0602026-778-114402575463546/AnsiballZ_copy.py'
Feb 02 15:20:36 compute-0 sudo[204954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:37 compute-0 python3.9[204956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045636.0602026-778-114402575463546/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:37 compute-0 sudo[204954]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.441636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637441695, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, "total_data_size": 3579286, "memory_usage": 3629288, "flush_reason": "Manual Compaction"}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637463191, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3492049, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9683, "largest_seqno": 11728, "table_properties": {"data_size": 3482732, "index_size": 5939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17926, "raw_average_key_size": 19, "raw_value_size": 3464287, "raw_average_value_size": 3765, "num_data_blocks": 269, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045403, "oldest_key_time": 1770045403, "file_creation_time": 1770045637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21751 microseconds, and 9060 cpu microseconds.
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.463387) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3492049 bytes OK
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.463445) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.464992) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.465016) EVENT_LOG_v1 {"time_micros": 1770045637465009, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.465039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3570737, prev total WAL file size 3570737, number of live WAL files 2.
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.466288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3410KB)], [26(6040KB)]
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637466337, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9678030, "oldest_snapshot_seqno": -1}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3696 keys, 8151937 bytes, temperature: kUnknown
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637512646, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8151937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8123352, "index_size": 18235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88720, "raw_average_key_size": 24, "raw_value_size": 8052765, "raw_average_value_size": 2178, "num_data_blocks": 791, "num_entries": 3696, "num_filter_entries": 3696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.512950) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8151937 bytes
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.514511) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.6 rd, 175.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4210, records dropped: 514 output_compression: NoCompression
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.514543) EVENT_LOG_v1 {"time_micros": 1770045637514529, "job": 10, "event": "compaction_finished", "compaction_time_micros": 46406, "compaction_time_cpu_micros": 23924, "output_level": 6, "num_output_files": 1, "total_output_size": 8151937, "num_input_records": 4210, "num_output_records": 3696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637515273, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045637516289, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.466203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.516335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.516341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.516344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.516347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:20:37.516350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:20:37 compute-0 python3.9[205106]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:20:38 compute-0 sudo[205259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnhcgahdhtvnoumfuhcekgxgyoxdeos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045638.0225446-984-108146968954752/AnsiballZ_seboolean.py'
Feb 02 15:20:38 compute-0 sudo[205259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:38 compute-0 ceph-mon[75334]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:38 compute-0 python3.9[205261]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 02 15:20:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:39 compute-0 sudo[205259]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:39 compute-0 auditd[700]: Audit daemon rotating log files
Feb 02 15:20:40 compute-0 sudo[205415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gugjxsceqtkxqahdzkvspkrmairkxkkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045639.8989716-992-224808958098939/AnsiballZ_copy.py'
Feb 02 15:20:40 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 02 15:20:40 compute-0 sudo[205415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:40 compute-0 python3.9[205417]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:40 compute-0 sudo[205415]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:40 compute-0 ceph-mon[75334]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:40 compute-0 sudo[205567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcwwcevhkpmqekealoivaqarvbbgowyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045640.5581455-992-227513835244016/AnsiballZ_copy.py'
Feb 02 15:20:40 compute-0 sudo[205567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:41 compute-0 python3.9[205569]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:41 compute-0 sudo[205567]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:41 compute-0 sudo[205719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mosyctwiyvmcrfmmqdphqhjukdznfdyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045641.2102697-992-127800182848474/AnsiballZ_copy.py'
Feb 02 15:20:41 compute-0 sudo[205719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:41 compute-0 python3.9[205721]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:41 compute-0 sudo[205719]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:42 compute-0 sudo[205871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodzkqfbqsxlegdkxwrqywwaxptxvqnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045641.7791827-992-156238020892016/AnsiballZ_copy.py'
Feb 02 15:20:42 compute-0 sudo[205871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:42 compute-0 python3.9[205873]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:42 compute-0 sudo[205871]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:42 compute-0 ceph-mon[75334]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:42 compute-0 sudo[206023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrlrpyqcdmgrqqebkrbmxswpaukfmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045642.3789222-992-270150231464150/AnsiballZ_copy.py'
Feb 02 15:20:42 compute-0 sudo[206023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:20:42
Feb 02 15:20:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:20:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:20:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Feb 02 15:20:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:20:42 compute-0 python3.9[206025]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:42 compute-0 sudo[206023]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:43 compute-0 sudo[206175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrobsrrndexbehixidbvpwgciuhhragj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045643.011454-1028-54108049439865/AnsiballZ_copy.py'
Feb 02 15:20:43 compute-0 sudo[206175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:43 compute-0 python3.9[206177]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:43 compute-0 sudo[206175]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:43 compute-0 sudo[206327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqygpldfmpmrufewzhlosmwebvliyjbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045643.6647782-1028-18948231820893/AnsiballZ_copy.py'
Feb 02 15:20:43 compute-0 sudo[206327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:44 compute-0 python3.9[206329]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:44 compute-0 sudo[206327]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:44 compute-0 sudo[206479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chvvpzdwpeubuszbprxnaukhbmsqtprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045644.1622298-1028-992664894200/AnsiballZ_copy.py'
Feb 02 15:20:44 compute-0 sudo[206479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:44 compute-0 ceph-mon[75334]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:44 compute-0 python3.9[206481]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:44 compute-0 sudo[206479]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:20:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:20:44 compute-0 sudo[206631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyamnwjmdqqeryqeyuojngeihxmxsqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045644.75252-1028-80707124083170/AnsiballZ_copy.py'
Feb 02 15:20:44 compute-0 sudo[206631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:45 compute-0 python3.9[206633]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:45 compute-0 sudo[206631]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:45 compute-0 sudo[206783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvcqfwrniqvbbjmadcrbbnrvmbgafkjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045645.3321352-1028-247652251615322/AnsiballZ_copy.py'
Feb 02 15:20:45 compute-0 sudo[206783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:45 compute-0 python3.9[206785]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:45 compute-0 sudo[206783]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:46 compute-0 sudo[206935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwgolyambnmfdcnvwzpjbskhsuuoftl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045645.9172215-1064-232395702837205/AnsiballZ_systemd.py'
Feb 02 15:20:46 compute-0 sudo[206935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:46 compute-0 python3.9[206937]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:20:46 compute-0 systemd[1]: Reloading.
Feb 02 15:20:46 compute-0 systemd-rc-local-generator[206963]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:20:46 compute-0 systemd-sysv-generator[206966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:20:46 compute-0 ceph-mon[75334]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:46 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Feb 02 15:20:46 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Feb 02 15:20:46 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 02 15:20:46 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 02 15:20:46 compute-0 systemd[1]: Starting libvirt logging daemon...
Feb 02 15:20:46 compute-0 systemd[1]: Started libvirt logging daemon.
Feb 02 15:20:46 compute-0 sudo[206935]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:47 compute-0 sudo[207127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chngkvnpsaocamqxcqjrpxzuhsuqfbwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045647.103187-1064-184386521490828/AnsiballZ_systemd.py'
Feb 02 15:20:47 compute-0 sudo[207127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:47 compute-0 python3.9[207129]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:20:47 compute-0 systemd[1]: Reloading.
Feb 02 15:20:47 compute-0 systemd-sysv-generator[207161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:20:47 compute-0 systemd-rc-local-generator[207157]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:20:47 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 02 15:20:47 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 02 15:20:47 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 02 15:20:47 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 02 15:20:47 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 02 15:20:47 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 02 15:20:47 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 02 15:20:47 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 02 15:20:47 compute-0 sudo[207127]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:48 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 02 15:20:48 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 02 15:20:48 compute-0 sudo[207345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petuxpfyyinvtbzsbgtstimbudjujugw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045648.100128-1064-252074472735455/AnsiballZ_systemd.py'
Feb 02 15:20:48 compute-0 sudo[207345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:48 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 02 15:20:48 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 02 15:20:48 compute-0 python3.9[207347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:20:48 compute-0 systemd[1]: Reloading.
Feb 02 15:20:48 compute-0 systemd-rc-local-generator[207377]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:20:48 compute-0 systemd-sysv-generator[207380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:20:48 compute-0 ceph-mon[75334]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:48 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 02 15:20:48 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 02 15:20:48 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 02 15:20:48 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 02 15:20:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 02 15:20:48 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 02 15:20:48 compute-0 sudo[207345]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:49 compute-0 setroubleshoot[207193]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 871b6a20-1181-481b-9211-7bb360d521a0
Feb 02 15:20:49 compute-0 setroubleshoot[207193]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Feb 02 15:20:49 compute-0 sudo[207566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prttnmfadgsmdjgdzlukdmxiayzjlbdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045649.1365685-1064-204924336462128/AnsiballZ_systemd.py'
Feb 02 15:20:49 compute-0 sudo[207566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:49 compute-0 python3.9[207568]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:20:49 compute-0 systemd[1]: Reloading.
Feb 02 15:20:49 compute-0 systemd-sysv-generator[207600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:20:49 compute-0 systemd-rc-local-generator[207596]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:20:50 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Feb 02 15:20:50 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 02 15:20:50 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 02 15:20:50 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 02 15:20:50 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 02 15:20:50 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 02 15:20:50 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 02 15:20:50 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 02 15:20:50 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 02 15:20:50 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 02 15:20:50 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 02 15:20:50 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 02 15:20:50 compute-0 sudo[207566]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:50 compute-0 sudo[207782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspskdwphcqrsovibkiqmkeeewuetcjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045650.3186414-1064-168295768300762/AnsiballZ_systemd.py'
Feb 02 15:20:50 compute-0 sudo[207782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:50 compute-0 ceph-mon[75334]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:50 compute-0 python3.9[207784]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:20:50 compute-0 systemd[1]: Reloading.
Feb 02 15:20:51 compute-0 systemd-sysv-generator[207811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:20:51 compute-0 systemd-rc-local-generator[207806]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:20:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:51 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Feb 02 15:20:51 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Feb 02 15:20:51 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 02 15:20:51 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 02 15:20:51 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 02 15:20:51 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 02 15:20:51 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 15:20:51 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 15:20:51 compute-0 sudo[207782]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:51 compute-0 sudo[207994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqdhihjlncyehzezfdojaqplxcuszull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045651.5791454-1101-226376320488391/AnsiballZ_file.py'
Feb 02 15:20:51 compute-0 sudo[207994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:52 compute-0 python3.9[207996]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:52 compute-0 sudo[207994]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:52 compute-0 sudo[208146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-webapswzbkwpgjrjkbcbeaymqniwvvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045652.2970645-1109-13498479839635/AnsiballZ_find.py'
Feb 02 15:20:52 compute-0 sudo[208146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:52 compute-0 ceph-mon[75334]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:52 compute-0 python3.9[208148]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:20:52 compute-0 sudo[208146]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:53 compute-0 sudo[208298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritnwifapsaztbqsynpxggjhcfidkqpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045653.0311081-1117-211862576923977/AnsiballZ_command.py'
Feb 02 15:20:53 compute-0 sudo[208298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:53 compute-0 python3.9[208300]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:20:53 compute-0 sudo[208298]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:20:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:20:54 compute-0 python3.9[208454]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:20:54 compute-0 ceph-mon[75334]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:55 compute-0 python3.9[208604]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:20:55 compute-0 python3.9[208725]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045654.7141767-1136-271619740209510/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b5a7626439a1f3b0f708899b151bf3c662216436 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:56 compute-0 sudo[208875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxgxiwrwdkppoqdfgiacdkujqvcsvvzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045656.003582-1151-207037480928531/AnsiballZ_command.py'
Feb 02 15:20:56 compute-0 sudo[208875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:56 compute-0 python3.9[208877]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e43470b2-6632-573a-87d3-0f5428ec59e9
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:20:56 compute-0 polkitd[43651]: Registered Authentication Agent for unix-process:208879:314244 (system bus name :1.2543 [pkttyagent --process 208879 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 15:20:56 compute-0 polkitd[43651]: Unregistered Authentication Agent for unix-process:208879:314244 (system bus name :1.2543, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 15:20:56 compute-0 polkitd[43651]: Registered Authentication Agent for unix-process:208878:314243 (system bus name :1.2544 [pkttyagent --process 208878 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 15:20:56 compute-0 polkitd[43651]: Unregistered Authentication Agent for unix-process:208878:314243 (system bus name :1.2544, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 15:20:56 compute-0 sudo[208875]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:56 compute-0 ceph-mon[75334]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:20:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:57 compute-0 python3.9[209039]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:20:57 compute-0 sudo[209189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znpswqsbphylolopjoolexeklukepozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045657.4959793-1167-103316708033393/AnsiballZ_command.py'
Feb 02 15:20:57 compute-0 sudo[209189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:58 compute-0 sudo[209189]: pam_unix(sudo:session): session closed for user root
Feb 02 15:20:58 compute-0 sudo[209342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czplypkrydobqojhyndgutfyfsfxoleg ; FSID=e43470b2-6632-573a-87d3-0f5428ec59e9 KEY=AQBNvYBpAAAAABAAhvMLOwrnQugwbkZIzlc9Gw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045658.2644582-1175-41440883552594/AnsiballZ_command.py'
Feb 02 15:20:58 compute-0 sudo[209342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:20:58 compute-0 ceph-mon[75334]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:58 compute-0 polkitd[43651]: Registered Authentication Agent for unix-process:209345:314470 (system bus name :1.2547 [pkttyagent --process 209345 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 02 15:20:58 compute-0 polkitd[43651]: Unregistered Authentication Agent for unix-process:209345:314470 (system bus name :1.2547, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 02 15:20:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:20:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:20:59.230 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:20:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:20:59.231 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:20:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:20:59.231 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:20:59 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 02 15:20:59 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 02 15:20:59 compute-0 sudo[209342]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:00 compute-0 sudo[209500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhmykdahnojchyexvzlvezmvyhzpikvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045660.047997-1183-55783477481288/AnsiballZ_copy.py'
Feb 02 15:21:00 compute-0 sudo[209500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:00 compute-0 python3.9[209502]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:00 compute-0 sudo[209500]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:00 compute-0 ceph-mon[75334]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:00 compute-0 sudo[209652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smruwothflhtztdrvqftyrwilryreoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045660.6619759-1191-99889261011334/AnsiballZ_stat.py'
Feb 02 15:21:00 compute-0 sudo[209652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:01 compute-0 python3.9[209654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:01 compute-0 sudo[209652]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:01 compute-0 podman[209749]: 2026-02-02 15:21:01.514459419 +0000 UTC m=+0.076960281 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 02 15:21:01 compute-0 sudo[209785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esedcbsiuxxfuwuyinbqcryplfkhcnsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045660.6619759-1191-99889261011334/AnsiballZ_copy.py'
Feb 02 15:21:01 compute-0 sudo[209785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:01 compute-0 podman[209795]: 2026-02-02 15:21:01.669634922 +0000 UTC m=+0.142469123 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:21:01 compute-0 python3.9[209798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045660.6619759-1191-99889261011334/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:01 compute-0 sudo[209785]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:02 compute-0 sudo[209973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cejwahyvetudfbcxclxjmensggygzqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045662.0836616-1207-191577197501089/AnsiballZ_file.py'
Feb 02 15:21:02 compute-0 sudo[209973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:02 compute-0 python3.9[209975]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:02 compute-0 sudo[209973]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:02 compute-0 ceph-mon[75334]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:03 compute-0 sudo[210125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxsionxzjjqwerfunmjjmlnusznunsir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045662.7574322-1215-122448427230854/AnsiballZ_stat.py'
Feb 02 15:21:03 compute-0 sudo[210125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:03 compute-0 python3.9[210127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:03 compute-0 sudo[210125]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:03 compute-0 sudo[210203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnqkbijdplnenslkvwwiumcjbwehlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045662.7574322-1215-122448427230854/AnsiballZ_file.py'
Feb 02 15:21:03 compute-0 sudo[210203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:03 compute-0 python3.9[210205]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:03 compute-0 sudo[210203]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:04 compute-0 sudo[210355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwpwaueghbvscrcfvadwvfvbmesxuxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045663.881672-1227-95922254615170/AnsiballZ_stat.py'
Feb 02 15:21:04 compute-0 sudo[210355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:04 compute-0 python3.9[210357]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:04 compute-0 sudo[210355]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:04 compute-0 sudo[210433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxtzonuxfmadnugrlvskmamtwojrzrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045663.881672-1227-95922254615170/AnsiballZ_file.py'
Feb 02 15:21:04 compute-0 sudo[210433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:04 compute-0 ceph-mon[75334]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:04 compute-0 python3.9[210435]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3975nw_e recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:04 compute-0 sudo[210433]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:05 compute-0 sudo[210585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlyxwlddxfkfzrbldzqvggahmsazciii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045665.0405571-1239-17746754879145/AnsiballZ_stat.py'
Feb 02 15:21:05 compute-0 sudo[210585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:05 compute-0 python3.9[210587]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:05 compute-0 sudo[210585]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:05 compute-0 sudo[210663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeztihrkajxenruqqpxfxxgjwatepnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045665.0405571-1239-17746754879145/AnsiballZ_file.py'
Feb 02 15:21:05 compute-0 sudo[210663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:06 compute-0 python3.9[210665]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:06 compute-0 sudo[210663]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:06 compute-0 sudo[210815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwshdnaksoxyluxovyrmumwxfzlkjst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045666.2603953-1252-172952532478638/AnsiballZ_command.py'
Feb 02 15:21:06 compute-0 sudo[210815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:06 compute-0 python3.9[210817]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:06 compute-0 sudo[210815]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:06 compute-0 ceph-mon[75334]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:07 compute-0 sudo[210968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuaagiyotjqejeltwbodoveplvrtonlw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045666.9340456-1260-215890084593429/AnsiballZ_edpm_nftables_from_files.py'
Feb 02 15:21:07 compute-0 sudo[210968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:07 compute-0 python3[210970]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 02 15:21:07 compute-0 sudo[210968]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:08 compute-0 sudo[211120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrxddzvfqcqwrdqjlnqumzvduypwpxpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045667.8203983-1268-78507842715144/AnsiballZ_stat.py'
Feb 02 15:21:08 compute-0 sudo[211120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:08 compute-0 python3.9[211122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:08 compute-0 sudo[211120]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:08 compute-0 sudo[211198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nssthgnotaqwgayisiwjvmuiozpexupl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045667.8203983-1268-78507842715144/AnsiballZ_file.py'
Feb 02 15:21:08 compute-0 sudo[211198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:08 compute-0 python3.9[211200]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:08 compute-0 sudo[211198]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:08 compute-0 ceph-mon[75334]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:09 compute-0 sudo[211350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqmfytmjdcuibbvkiiwcvilrrdnwxrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045668.8993895-1280-217196241673790/AnsiballZ_stat.py'
Feb 02 15:21:09 compute-0 sudo[211350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:09 compute-0 python3.9[211352]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:09 compute-0 sudo[211350]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:09 compute-0 sudo[211475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsupronhrflcxkncsnqxlokvahqoubqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045668.8993895-1280-217196241673790/AnsiballZ_copy.py'
Feb 02 15:21:09 compute-0 sudo[211475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:10 compute-0 python3.9[211477]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045668.8993895-1280-217196241673790/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:10 compute-0 sudo[211475]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:10 compute-0 sudo[211627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqqamzzzwyakcydlzggzihvjkqraohq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045670.2746425-1295-257844813565863/AnsiballZ_stat.py'
Feb 02 15:21:10 compute-0 sudo[211627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:10 compute-0 python3.9[211629]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:10 compute-0 ceph-mon[75334]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:10 compute-0 sudo[211627]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:11 compute-0 sudo[211705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-volqtfekrobjejjwbtloyrssmskktulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045670.2746425-1295-257844813565863/AnsiballZ_file.py'
Feb 02 15:21:11 compute-0 sudo[211705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:11 compute-0 python3.9[211707]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:11 compute-0 sudo[211705]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:11 compute-0 sudo[211857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslmjwgwswatmxufxuccuvcbmxoyxvdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045671.3480856-1307-274018347974501/AnsiballZ_stat.py'
Feb 02 15:21:11 compute-0 sudo[211857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:11 compute-0 python3.9[211859]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:11 compute-0 sudo[211857]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:12 compute-0 sudo[211935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjygkgyeadyeagzqrhvcnakfbtqfpmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045671.3480856-1307-274018347974501/AnsiballZ_file.py'
Feb 02 15:21:12 compute-0 sudo[211935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:12 compute-0 python3.9[211937]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:12 compute-0 sudo[211935]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:12 compute-0 ceph-mon[75334]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:12 compute-0 sudo[212087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckimsuwqwbndloezmsqdbyfzitwdaev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045672.498667-1319-174375555919512/AnsiballZ_stat.py'
Feb 02 15:21:12 compute-0 sudo[212087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:13 compute-0 python3.9[212089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:13 compute-0 sudo[212087]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:13 compute-0 sudo[212212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waiwghxbvmfjdjcvrktjmuzztbavyxtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045672.498667-1319-174375555919512/AnsiballZ_copy.py'
Feb 02 15:21:13 compute-0 sudo[212212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:13 compute-0 python3.9[212214]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770045672.498667-1319-174375555919512/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:13 compute-0 sudo[212212]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:14 compute-0 sudo[212364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdryuodefvyuoquqppxbbvdavzncapjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045673.968362-1334-65708967925212/AnsiballZ_file.py'
Feb 02 15:21:14 compute-0 sudo[212364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:14 compute-0 python3.9[212366]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:14 compute-0 sudo[212364]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:14 compute-0 ceph-mon[75334]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:15 compute-0 sudo[212516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdipvolhhrhhuyceesrtchtggqvusym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045674.704607-1342-62706535250768/AnsiballZ_command.py'
Feb 02 15:21:15 compute-0 sudo[212516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:15 compute-0 python3.9[212518]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:15 compute-0 sudo[212516]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:15 compute-0 sudo[212671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtilgrowhwtmczzdoqlrfkslxuqupbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045675.4694517-1350-263303807734338/AnsiballZ_blockinfile.py'
Feb 02 15:21:15 compute-0 sudo[212671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:16 compute-0 python3.9[212673]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:16 compute-0 sudo[212671]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:16 compute-0 sudo[212823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymemrtcdqxmzwdixaliztqhkbukbwist ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045676.325473-1359-115711753026393/AnsiballZ_command.py'
Feb 02 15:21:16 compute-0 sudo[212823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:16 compute-0 python3.9[212825]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:16 compute-0 ceph-mon[75334]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:16 compute-0 sudo[212823]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:17 compute-0 sudo[212976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nestfejeefrupbxhobgcdwmfnekzpyqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045677.0208433-1367-76358464508263/AnsiballZ_stat.py'
Feb 02 15:21:17 compute-0 sudo[212976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:17 compute-0 python3.9[212978]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:21:17 compute-0 sudo[212976]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:18 compute-0 sudo[213130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hllnwkkqxneyukghwthwkmjunoqtjmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045677.7377365-1375-280841934010691/AnsiballZ_command.py'
Feb 02 15:21:18 compute-0 sudo[213130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:18 compute-0 python3.9[213132]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:18 compute-0 sudo[213130]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:18 compute-0 sudo[213285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eohxthndztemptgnolhhcajgmbyiyrhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045678.568368-1383-94294043716242/AnsiballZ_file.py'
Feb 02 15:21:18 compute-0 sudo[213285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:18 compute-0 ceph-mon[75334]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:18 compute-0 python3.9[213287]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:19 compute-0 sudo[213285]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:19 compute-0 sudo[213437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gasihjycwtkzvtjyharclpitbokqntgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045679.1679432-1391-81996912877825/AnsiballZ_stat.py'
Feb 02 15:21:19 compute-0 sudo[213437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:19 compute-0 python3.9[213439]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:19 compute-0 sudo[213437]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:19 compute-0 sudo[213560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzfvvmmxpatszbawvismpsrnvqiihug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045679.1679432-1391-81996912877825/AnsiballZ_copy.py'
Feb 02 15:21:19 compute-0 sudo[213560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:20 compute-0 python3.9[213562]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045679.1679432-1391-81996912877825/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:20 compute-0 sudo[213560]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:20 compute-0 sudo[213712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwakltstkflzedfpzardxdtjpgsplvag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045680.2347631-1406-100451302357873/AnsiballZ_stat.py'
Feb 02 15:21:20 compute-0 sudo[213712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:20 compute-0 python3.9[213714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:20 compute-0 sudo[213712]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:20 compute-0 ceph-mon[75334]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:21 compute-0 sudo[213835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvpvdtfhmhmskhhxdobdnzezhsqytydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045680.2347631-1406-100451302357873/AnsiballZ_copy.py'
Feb 02 15:21:21 compute-0 sudo[213835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:21 compute-0 python3.9[213837]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045680.2347631-1406-100451302357873/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:21 compute-0 sudo[213835]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:21 compute-0 sudo[213987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvpkrvgpzxpzqetmysjeqrrnkunvigeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045681.392024-1421-214070705648367/AnsiballZ_stat.py'
Feb 02 15:21:21 compute-0 sudo[213987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:21 compute-0 python3.9[213989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:21 compute-0 sudo[213987]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:22 compute-0 sudo[214110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdukbowbhxugnzripdzmudznugeceeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045681.392024-1421-214070705648367/AnsiballZ_copy.py'
Feb 02 15:21:22 compute-0 sudo[214110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:22 compute-0 python3.9[214112]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045681.392024-1421-214070705648367/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:22 compute-0 sudo[214110]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:22 compute-0 ceph-mon[75334]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:23 compute-0 sudo[214262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrsjhexqjpktghljiruuucublyogzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045682.6768656-1436-207670282339738/AnsiballZ_systemd.py'
Feb 02 15:21:23 compute-0 sudo[214262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:23 compute-0 python3.9[214264]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:21:23 compute-0 systemd[1]: Reloading.
Feb 02 15:21:23 compute-0 systemd-rc-local-generator[214287]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:21:23 compute-0 systemd-sysv-generator[214294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:21:23 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Feb 02 15:21:23 compute-0 sudo[214262]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:24 compute-0 sudo[214454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzbgffxzxhmskxwgzknkcqqjoylfxelj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045684.0291924-1444-156828399035673/AnsiballZ_systemd.py'
Feb 02 15:21:24 compute-0 sudo[214454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:24 compute-0 python3.9[214456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 02 15:21:24 compute-0 systemd[1]: Reloading.
Feb 02 15:21:24 compute-0 systemd-rc-local-generator[214506]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:21:24 compute-0 systemd-sysv-generator[214509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:21:24 compute-0 ceph-mon[75334]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:25 compute-0 sudo[214458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:21:25 compute-0 sudo[214458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:25 compute-0 sudo[214458]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:25 compute-0 systemd[1]: Reloading.
Feb 02 15:21:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:25 compute-0 systemd-sysv-generator[214567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:21:25 compute-0 systemd-rc-local-generator[214559]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:21:25 compute-0 sudo[214518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:21:25 compute-0 sudo[214454]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:25 compute-0 sudo[214518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:25 compute-0 sshd-session[156073]: Connection closed by 192.168.122.30 port 38562
Feb 02 15:21:25 compute-0 sshd-session[156049]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:21:25 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Feb 02 15:21:25 compute-0 systemd[1]: session-49.scope: Consumed 3min 10.612s CPU time.
Feb 02 15:21:25 compute-0 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Feb 02 15:21:25 compute-0 systemd-logind[786]: Removed session 49.
Feb 02 15:21:25 compute-0 sudo[214518]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:21:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:21:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:21:25 compute-0 sudo[214633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:21:25 compute-0 sudo[214633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:25 compute-0 sudo[214633]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:26 compute-0 sudo[214658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:21:26 compute-0 sudo[214658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.326627311 +0000 UTC m=+0.050676298 container create 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:21:26 compute-0 systemd[1]: Started libpod-conmon-0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274.scope.
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.301005655 +0000 UTC m=+0.025054682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.42933421 +0000 UTC m=+0.153383177 container init 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.437234546 +0000 UTC m=+0.161283533 container start 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.442059531 +0000 UTC m=+0.166108568 container attach 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:21:26 compute-0 boring_williamson[214713]: 167 167
Feb 02 15:21:26 compute-0 systemd[1]: libpod-0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274.scope: Deactivated successfully.
Feb 02 15:21:26 compute-0 conmon[214713]: conmon 0b61dc05075606d95e90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274.scope/container/memory.events
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.448502458 +0000 UTC m=+0.172551445 container died 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ec5a25c7b756d7beec539402bb1c66f6abfda0b8f00e8e75dafe029b040946-merged.mount: Deactivated successfully.
Feb 02 15:21:26 compute-0 podman[214697]: 2026-02-02 15:21:26.486131196 +0000 UTC m=+0.210180183 container remove 0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 02 15:21:26 compute-0 systemd[1]: libpod-conmon-0b61dc05075606d95e90dbd9b13090aad5755ed07bbfe8844bd9ab472d1cc274.scope: Deactivated successfully.
Feb 02 15:21:26 compute-0 podman[214737]: 2026-02-02 15:21:26.668053853 +0000 UTC m=+0.055810021 container create 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:21:26 compute-0 systemd[1]: Started libpod-conmon-68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4.scope.
Feb 02 15:21:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:26 compute-0 podman[214737]: 2026-02-02 15:21:26.649212354 +0000 UTC m=+0.036968542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:26 compute-0 podman[214737]: 2026-02-02 15:21:26.773807342 +0000 UTC m=+0.161563560 container init 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:21:26 compute-0 podman[214737]: 2026-02-02 15:21:26.791239294 +0000 UTC m=+0.178995492 container start 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:21:26 compute-0 podman[214737]: 2026-02-02 15:21:26.795768953 +0000 UTC m=+0.183525161 container attach 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:21:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:26 compute-0 ceph-mon[75334]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:27 compute-0 sharp_wiles[214755]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:21:27 compute-0 sharp_wiles[214755]: --> All data devices are unavailable
Feb 02 15:21:27 compute-0 systemd[1]: libpod-68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4.scope: Deactivated successfully.
Feb 02 15:21:27 compute-0 podman[214737]: 2026-02-02 15:21:27.337045988 +0000 UTC m=+0.724802196 container died 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9556b6f52029a2c07295376347767c3ffe3d35d7c2ab2afd4ba49e6c6ab14891-merged.mount: Deactivated successfully.
Feb 02 15:21:27 compute-0 podman[214737]: 2026-02-02 15:21:27.392305585 +0000 UTC m=+0.780061773 container remove 68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:21:27 compute-0 systemd[1]: libpod-conmon-68b911cc38d16f66204899dfd499672b170af09dda6086565993acc90b67f1f4.scope: Deactivated successfully.
Feb 02 15:21:27 compute-0 sudo[214658]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:27 compute-0 sudo[214786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:21:27 compute-0 sudo[214786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:27 compute-0 sudo[214786]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:27 compute-0 sudo[214811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:21:27 compute-0 sudo[214811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.82503915 +0000 UTC m=+0.042199937 container create 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:21:27 compute-0 systemd[1]: Started libpod-conmon-80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363.scope.
Feb 02 15:21:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.900168532 +0000 UTC m=+0.117329429 container init 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.806314123 +0000 UTC m=+0.023474950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.905683425 +0000 UTC m=+0.122844242 container start 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.909620128 +0000 UTC m=+0.126781025 container attach 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:21:27 compute-0 friendly_mccarthy[214865]: 167 167
Feb 02 15:21:27 compute-0 systemd[1]: libpod-80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363.scope: Deactivated successfully.
Feb 02 15:21:27 compute-0 conmon[214865]: conmon 80d5d72d55556e750c45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363.scope/container/memory.events
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.912314278 +0000 UTC m=+0.129475105 container died 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1041c1d855f63752a95a522a37b93e1cb78e0875497d69e7405e027c43e172b-merged.mount: Deactivated successfully.
Feb 02 15:21:27 compute-0 podman[214849]: 2026-02-02 15:21:27.95624973 +0000 UTC m=+0.173410547 container remove 80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mccarthy, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:21:27 compute-0 systemd[1]: libpod-conmon-80d5d72d55556e750c45cf4da6225ab34fdf5cb1f233ac13ebed767ab03b3363.scope: Deactivated successfully.
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.110040276 +0000 UTC m=+0.046039178 container create 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:21:28 compute-0 systemd[1]: Started libpod-conmon-9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc.scope.
Feb 02 15:21:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.089873222 +0000 UTC m=+0.025872214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8e735a541601f4fca4262d28fbe0a5479830c26d270164d66175607b62e788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8e735a541601f4fca4262d28fbe0a5479830c26d270164d66175607b62e788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8e735a541601f4fca4262d28fbe0a5479830c26d270164d66175607b62e788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8e735a541601f4fca4262d28fbe0a5479830c26d270164d66175607b62e788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.212865129 +0000 UTC m=+0.148864041 container init 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.220596299 +0000 UTC m=+0.156595201 container start 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.225553288 +0000 UTC m=+0.161552220 container attach 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:21:28 compute-0 optimistic_spence[214907]: {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     "0": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "devices": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "/dev/loop3"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             ],
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_name": "ceph_lv0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_size": "21470642176",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "name": "ceph_lv0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "tags": {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_name": "ceph",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.crush_device_class": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.encrypted": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.objectstore": "bluestore",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_id": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.vdo": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.with_tpm": "0"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             },
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "vg_name": "ceph_vg0"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         }
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     ],
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     "1": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "devices": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "/dev/loop4"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             ],
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_name": "ceph_lv1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_size": "21470642176",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "name": "ceph_lv1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "tags": {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_name": "ceph",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.crush_device_class": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.encrypted": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.objectstore": "bluestore",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_id": "1",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.vdo": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.with_tpm": "0"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             },
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "vg_name": "ceph_vg1"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         }
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     ],
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     "2": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "devices": [
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "/dev/loop5"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             ],
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_name": "ceph_lv2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_size": "21470642176",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "name": "ceph_lv2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "tags": {
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.cluster_name": "ceph",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.crush_device_class": "",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.encrypted": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.objectstore": "bluestore",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osd_id": "2",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.vdo": "0",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:                 "ceph.with_tpm": "0"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             },
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "type": "block",
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:             "vg_name": "ceph_vg2"
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:         }
Feb 02 15:21:28 compute-0 optimistic_spence[214907]:     ]
Feb 02 15:21:28 compute-0 optimistic_spence[214907]: }
Feb 02 15:21:28 compute-0 systemd[1]: libpod-9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc.scope: Deactivated successfully.
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.567639408 +0000 UTC m=+0.503638310 container died 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a8e735a541601f4fca4262d28fbe0a5479830c26d270164d66175607b62e788-merged.mount: Deactivated successfully.
Feb 02 15:21:28 compute-0 podman[214889]: 2026-02-02 15:21:28.614319091 +0000 UTC m=+0.550318033 container remove 9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_spence, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:21:28 compute-0 systemd[1]: libpod-conmon-9c242bfe6352000aff386b80581566050267e351fbf5fe4d143e9ac960fdf0bc.scope: Deactivated successfully.
Feb 02 15:21:28 compute-0 sudo[214811]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:28 compute-0 sudo[214926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:21:28 compute-0 sudo[214926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:28 compute-0 sudo[214926]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:28 compute-0 sudo[214951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:21:28 compute-0 sudo[214951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:28 compute-0 ceph-mon[75334]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.0849008 +0000 UTC m=+0.056320875 container create a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:21:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:29 compute-0 systemd[1]: Started libpod-conmon-a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6.scope.
Feb 02 15:21:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.061397399 +0000 UTC m=+0.032817554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.170264519 +0000 UTC m=+0.141684624 container init a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.180760871 +0000 UTC m=+0.152180936 container start a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.184868138 +0000 UTC m=+0.156288223 container attach a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:21:29 compute-0 epic_hoover[215004]: 167 167
Feb 02 15:21:29 compute-0 systemd[1]: libpod-a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6.scope: Deactivated successfully.
Feb 02 15:21:29 compute-0 conmon[215004]: conmon a0689e2f95953b266ae6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6.scope/container/memory.events
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.190219597 +0000 UTC m=+0.161639702 container died a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9107ac79ecca36cd457a180a09947263373498d84cee9739c104c8c6a96c9a-merged.mount: Deactivated successfully.
Feb 02 15:21:29 compute-0 podman[214988]: 2026-02-02 15:21:29.235245607 +0000 UTC m=+0.206665712 container remove a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:21:29 compute-0 systemd[1]: libpod-conmon-a0689e2f95953b266ae68d111410ef94480c8914053063e7c390b298a86323f6.scope: Deactivated successfully.
Feb 02 15:21:29 compute-0 podman[215030]: 2026-02-02 15:21:29.39238825 +0000 UTC m=+0.046971221 container create d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:21:29 compute-0 systemd[1]: Started libpod-conmon-d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3.scope.
Feb 02 15:21:29 compute-0 podman[215030]: 2026-02-02 15:21:29.370659936 +0000 UTC m=+0.025242997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:21:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50825432d7ee9eb994e3bbe7a4d2ed6b19281daaec0e3849851a89b7173cbea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50825432d7ee9eb994e3bbe7a4d2ed6b19281daaec0e3849851a89b7173cbea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50825432d7ee9eb994e3bbe7a4d2ed6b19281daaec0e3849851a89b7173cbea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50825432d7ee9eb994e3bbe7a4d2ed6b19281daaec0e3849851a89b7173cbea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:21:29 compute-0 podman[215030]: 2026-02-02 15:21:29.497306777 +0000 UTC m=+0.151889778 container init d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:21:29 compute-0 podman[215030]: 2026-02-02 15:21:29.50550823 +0000 UTC m=+0.160091221 container start d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:21:29 compute-0 podman[215030]: 2026-02-02 15:21:29.509173126 +0000 UTC m=+0.163756137 container attach d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:21:30 compute-0 lvm[215126]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:21:30 compute-0 lvm[215126]: VG ceph_vg0 finished
Feb 02 15:21:30 compute-0 lvm[215127]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:21:30 compute-0 lvm[215127]: VG ceph_vg1 finished
Feb 02 15:21:30 compute-0 lvm[215129]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:21:30 compute-0 lvm[215129]: VG ceph_vg2 finished
Feb 02 15:21:30 compute-0 angry_saha[215047]: {}
Feb 02 15:21:30 compute-0 systemd[1]: libpod-d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3.scope: Deactivated successfully.
Feb 02 15:21:30 compute-0 systemd[1]: libpod-d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3.scope: Consumed 1.166s CPU time.
Feb 02 15:21:30 compute-0 podman[215132]: 2026-02-02 15:21:30.350479719 +0000 UTC m=+0.040720800 container died d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-50825432d7ee9eb994e3bbe7a4d2ed6b19281daaec0e3849851a89b7173cbea9-merged.mount: Deactivated successfully.
Feb 02 15:21:30 compute-0 podman[215132]: 2026-02-02 15:21:30.392535822 +0000 UTC m=+0.082776823 container remove d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:21:30 compute-0 systemd[1]: libpod-conmon-d0ecda89bca79cbf0cce5bce9b95718f10c6ab71b72dc81d91d610f394dd1ad3.scope: Deactivated successfully.
Feb 02 15:21:30 compute-0 sudo[214951]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:21:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:21:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:30 compute-0 sudo[215147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:21:30 compute-0 sudo[215147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:21:30 compute-0 sudo[215147]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:30 compute-0 ceph-mon[75334]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:21:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:31 compute-0 sshd-session[215172]: Accepted publickey for zuul from 192.168.122.30 port 47640 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:21:31 compute-0 systemd-logind[786]: New session 50 of user zuul.
Feb 02 15:21:31 compute-0 systemd[1]: Started Session 50 of User zuul.
Feb 02 15:21:31 compute-0 sshd-session[215172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:21:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:32 compute-0 podman[215300]: 2026-02-02 15:21:32.160527376 +0000 UTC m=+0.058499612 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:21:32 compute-0 podman[215299]: 2026-02-02 15:21:32.221569722 +0000 UTC m=+0.123945782 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 02 15:21:32 compute-0 python3.9[215361]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:21:32 compute-0 ceph-mon[75334]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:33 compute-0 python3.9[215526]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:21:33 compute-0 network[215543]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:21:33 compute-0 network[215544]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:21:33 compute-0 network[215545]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:21:34 compute-0 ceph-mon[75334]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:36 compute-0 ceph-mon[75334]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:37 compute-0 sudo[215815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biqrurjolvbypcalhdtbujyviflmwqkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045696.9183602-42-213475665970731/AnsiballZ_setup.py'
Feb 02 15:21:37 compute-0 sudo[215815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:37 compute-0 python3.9[215817]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 02 15:21:37 compute-0 sudo[215815]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:38 compute-0 sudo[215899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwwhyfptsgnrhposfqkyfzhjrmpkehgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045696.9183602-42-213475665970731/AnsiballZ_dnf.py'
Feb 02 15:21:38 compute-0 sudo[215899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:38 compute-0 python3.9[215901]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:21:38 compute-0 ceph-mon[75334]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:40 compute-0 ceph-mon[75334]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:21:42
Feb 02 15:21:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:21:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:21:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Feb 02 15:21:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:21:42 compute-0 ceph-mon[75334]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:43 compute-0 sudo[215899]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:44 compute-0 sudo[216052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waqsrqmpexiehqmpamhgyaweryfuawab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045703.6243277-54-75720384315510/AnsiballZ_stat.py'
Feb 02 15:21:44 compute-0 sudo[216052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:44 compute-0 python3.9[216054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:21:44 compute-0 sudo[216052]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:21:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:21:44 compute-0 ceph-mon[75334]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:45 compute-0 sudo[216204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zonfooonayrpxlubikkmeslazgrzmbcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045704.558297-64-254553055125025/AnsiballZ_command.py'
Feb 02 15:21:45 compute-0 sudo[216204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:45 compute-0 python3.9[216206]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:45 compute-0 sudo[216204]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:45 compute-0 sudo[216357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnbaevdshpoavghlzpntahysdmuuqkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045705.5892396-74-93623815841213/AnsiballZ_stat.py'
Feb 02 15:21:45 compute-0 sudo[216357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:46 compute-0 python3.9[216359]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:21:46 compute-0 sudo[216357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:46 compute-0 sudo[216509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfnldyztwijlsrakfsachjqswrdhtcve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045706.3255043-82-202043669619031/AnsiballZ_command.py'
Feb 02 15:21:46 compute-0 sudo[216509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:46 compute-0 python3.9[216511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:21:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:46 compute-0 sudo[216509]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:47 compute-0 ceph-mon[75334]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:47 compute-0 sudo[216662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euiyqkiasmcghnnelawwnmfvadjafsgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045707.0324662-90-160386410469069/AnsiballZ_stat.py'
Feb 02 15:21:47 compute-0 sudo[216662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:47 compute-0 python3.9[216664]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:21:47 compute-0 sudo[216662]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:48 compute-0 sudo[216785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahigbulidrjejoxqpikghvgdoyliitja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045707.0324662-90-160386410469069/AnsiballZ_copy.py'
Feb 02 15:21:48 compute-0 sudo[216785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:48 compute-0 python3.9[216787]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045707.0324662-90-160386410469069/.source.iscsi _original_basename=.ri4ighpv follow=False checksum=ec3611d3d5be2e684df6c341c8e9068fc433de7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:48 compute-0 sudo[216785]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:48 compute-0 sudo[216937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvjrjratbgchklinvxvsfyyzxriloqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045708.4187574-105-196975862960159/AnsiballZ_file.py'
Feb 02 15:21:48 compute-0 sudo[216937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:49 compute-0 ceph-mon[75334]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:49 compute-0 python3.9[216939]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:49 compute-0 sudo[216937]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:49 compute-0 sudo[217089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwitrcchtlbshicoqshlfcnbcyaalolt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045709.269949-113-222893177866816/AnsiballZ_lineinfile.py'
Feb 02 15:21:49 compute-0 sudo[217089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:49 compute-0 python3.9[217091]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:21:49 compute-0 sudo[217089]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:50 compute-0 sudo[217241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpefbntllhhmeggdwhfowurxolhwelve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045710.1677268-122-199604225931647/AnsiballZ_systemd_service.py'
Feb 02 15:21:50 compute-0 sudo[217241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:51 compute-0 ceph-mon[75334]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:51 compute-0 python3.9[217243]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:21:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:51 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 02 15:21:51 compute-0 sudo[217241]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:51 compute-0 sudo[217397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxkpkfywlcrawmyrcuglivxpxggbtxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045711.3095329-130-13916695895059/AnsiballZ_systemd_service.py'
Feb 02 15:21:51 compute-0 sudo[217397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:51 compute-0 python3.9[217399]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:21:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:51 compute-0 systemd[1]: Reloading.
Feb 02 15:21:52 compute-0 systemd-sysv-generator[217426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:21:52 compute-0 systemd-rc-local-generator[217420]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:21:52 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 02 15:21:52 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 02 15:21:52 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Feb 02 15:21:52 compute-0 systemd[1]: Started Open-iSCSI.
Feb 02 15:21:52 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb 02 15:21:52 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb 02 15:21:52 compute-0 sudo[217397]: pam_unix(sudo:session): session closed for user root
Feb 02 15:21:53 compute-0 ceph-mon[75334]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:53 compute-0 python3.9[217598]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:21:53 compute-0 network[217615]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:21:53 compute-0 network[217616]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:21:53 compute-0 network[217617]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:21:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:21:55 compute-0 ceph-mon[75334]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:56 compute-0 sudo[217887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocopurmarmfeysfabyzwkbhnbgqunogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045716.3620834-153-21274568701117/AnsiballZ_dnf.py'
Feb 02 15:21:56 compute-0 sudo[217887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:21:56 compute-0 python3.9[217889]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:21:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:21:57 compute-0 ceph-mon[75334]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:21:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:21:58 compute-0 systemd[1]: Reloading.
Feb 02 15:21:59 compute-0 systemd-sysv-generator[217933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:21:59 compute-0 systemd-rc-local-generator[217929]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:21:59 compute-0 ceph-mon[75334]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:21:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:21:59.231 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:21:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:21:59.233 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:21:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:21:59.233 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:21:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:21:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:21:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:21:59 compute-0 systemd[1]: run-r682bfb4764b147e798d6644deab71d16.service: Deactivated successfully.
Feb 02 15:21:59 compute-0 sudo[217887]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:00 compute-0 sudo[218202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmjovetknwrvtbaaskwxqqcbjnyzduqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045720.0116818-162-170876978420705/AnsiballZ_file.py'
Feb 02 15:22:00 compute-0 sudo[218202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:00 compute-0 python3.9[218204]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 02 15:22:00 compute-0 sudo[218202]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:01 compute-0 ceph-mon[75334]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:01 compute-0 sudo[218354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuyfqeaftiyqnkejdmghnulbqontkdzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045720.779549-170-186891343127800/AnsiballZ_modprobe.py'
Feb 02 15:22:01 compute-0 sudo[218354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:01 compute-0 python3.9[218356]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 02 15:22:01 compute-0 sudo[218354]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:01 compute-0 sudo[218510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvukrammkdcqasftbmilfkbllhqzyrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045721.6334345-178-144320523481213/AnsiballZ_stat.py'
Feb 02 15:22:01 compute-0 sudo[218510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:02 compute-0 ceph-mon[75334]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:02 compute-0 python3.9[218512]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:22:02 compute-0 sudo[218510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:02 compute-0 podman[218578]: 2026-02-02 15:22:02.304691369 +0000 UTC m=+0.046863809 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 02 15:22:02 compute-0 podman[218563]: 2026-02-02 15:22:02.319205698 +0000 UTC m=+0.061220723 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:22:02 compute-0 sudo[218680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncijszjgizexfwbhlkujtzpbloljfqsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045721.6334345-178-144320523481213/AnsiballZ_copy.py'
Feb 02 15:22:02 compute-0 sudo[218680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:02 compute-0 python3.9[218682]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045721.6334345-178-144320523481213/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:02 compute-0 sudo[218680]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:03 compute-0 sudo[218832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dggcueaqpaejleprasvdyutbtvltwhtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045722.757363-194-91571731294853/AnsiballZ_lineinfile.py'
Feb 02 15:22:03 compute-0 sudo[218832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:03 compute-0 python3.9[218834]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:03 compute-0 sudo[218832]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:04 compute-0 sudo[218984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbnwdruhtfaqenemkkbrknurhmcukvld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045723.4205897-202-198306880447856/AnsiballZ_systemd.py'
Feb 02 15:22:04 compute-0 sudo[218984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:04 compute-0 ceph-mon[75334]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:04 compute-0 python3.9[218986]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:22:04 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 02 15:22:04 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 02 15:22:04 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 02 15:22:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 15:22:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 02 15:22:04 compute-0 sudo[218984]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:04 compute-0 sudo[219141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lottcwgxzngckckskereajtiizozfxyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045724.65651-210-241321647821570/AnsiballZ_command.py'
Feb 02 15:22:04 compute-0 sudo[219141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:05 compute-0 python3.9[219143]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:05 compute-0 sudo[219141]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:05 compute-0 sudo[219294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qovvwotiiuloftvcjslsjmufqayulftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045725.511608-220-200287359624997/AnsiballZ_stat.py'
Feb 02 15:22:05 compute-0 sudo[219294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:06 compute-0 python3.9[219296]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:22:06 compute-0 sudo[219294]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:06 compute-0 ceph-mon[75334]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:06 compute-0 sudo[219446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adiruoddtdwohrqffnmjbhxkjhrfuruy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045726.251196-229-190617009581745/AnsiballZ_stat.py'
Feb 02 15:22:06 compute-0 sudo[219446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:06 compute-0 python3.9[219448]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:22:06 compute-0 sudo[219446]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:07 compute-0 sudo[219569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpxgndhucwwowqnboynremmfehjuskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045726.251196-229-190617009581745/AnsiballZ_copy.py'
Feb 02 15:22:07 compute-0 sudo[219569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:07 compute-0 python3.9[219571]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045726.251196-229-190617009581745/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:07 compute-0 sudo[219569]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:07 compute-0 sudo[219721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inbqisnonvnkqtiuaxtrtanpnkqtrpyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045727.526114-244-128021766930025/AnsiballZ_command.py'
Feb 02 15:22:07 compute-0 sudo[219721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:08 compute-0 python3.9[219723]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:08 compute-0 sudo[219721]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:08 compute-0 ceph-mon[75334]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:08 compute-0 sudo[219874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keyvsjormnfhployehmiujowkruidazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045728.2331276-252-133667894310369/AnsiballZ_lineinfile.py'
Feb 02 15:22:08 compute-0 sudo[219874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:08 compute-0 python3.9[219876]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:08 compute-0 sudo[219874]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:09 compute-0 sudo[220026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsmftqhndsnwxpxztfrwdjpuiervevdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045728.893685-260-66680657775747/AnsiballZ_replace.py'
Feb 02 15:22:09 compute-0 sudo[220026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:09 compute-0 python3.9[220028]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:09 compute-0 sudo[220026]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:09 compute-0 sudo[220178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfldvcyshkgatfmpzcbaugkenvzpgyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045729.7090957-268-142716133172022/AnsiballZ_replace.py'
Feb 02 15:22:09 compute-0 sudo[220178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:10 compute-0 python3.9[220180]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:10 compute-0 sudo[220178]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:10 compute-0 ceph-mon[75334]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:10 compute-0 sudo[220330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdzzikipddoclyqggqeltqrdpbtelzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045730.363211-277-30783404820277/AnsiballZ_lineinfile.py'
Feb 02 15:22:10 compute-0 sudo[220330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:10 compute-0 python3.9[220332]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:10 compute-0 sudo[220330]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:11 compute-0 sudo[220482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjlcqtfbjgzmdoxijrvonrlyrquoqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045730.907043-277-210171333102904/AnsiballZ_lineinfile.py'
Feb 02 15:22:11 compute-0 sudo[220482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:11 compute-0 python3.9[220484]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:11 compute-0 sudo[220482]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:11 compute-0 sudo[220634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlopywrzibkftiejnawvgcffkpxqyiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045731.4904115-277-67914583173815/AnsiballZ_lineinfile.py'
Feb 02 15:22:11 compute-0 sudo[220634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:11 compute-0 python3.9[220636]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:11 compute-0 sudo[220634]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:12 compute-0 sudo[220786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jguawrhzqloqbzpkflxlituakljxklpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045731.9896789-277-15920321409015/AnsiballZ_lineinfile.py'
Feb 02 15:22:12 compute-0 ceph-mon[75334]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:12 compute-0 sudo[220786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:12 compute-0 python3.9[220788]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:12 compute-0 sudo[220786]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:12 compute-0 sudo[220938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhbcigqcogbwjvtcnjtodcmrctgizuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045732.6274295-306-122480093799502/AnsiballZ_stat.py'
Feb 02 15:22:12 compute-0 sudo[220938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:13 compute-0 python3.9[220940]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:22:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:13 compute-0 sudo[220938]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:13 compute-0 sudo[221092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpqatokmaeswpobrqpkpkhlhimgpsrwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045733.2861238-314-75912722148960/AnsiballZ_command.py'
Feb 02 15:22:13 compute-0 sudo[221092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:13 compute-0 python3.9[221094]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:13 compute-0 sudo[221092]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:14 compute-0 sudo[221245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmjupriwpjxakxsswejkujxuvpvohmgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045733.976171-323-111559750211857/AnsiballZ_systemd_service.py'
Feb 02 15:22:14 compute-0 sudo[221245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:14 compute-0 ceph-mon[75334]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:14 compute-0 python3.9[221247]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:14 compute-0 systemd[1]: Listening on multipathd control socket.
Feb 02 15:22:14 compute-0 sudo[221245]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:15 compute-0 sudo[221401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldgdhawnomvdnsgvaradgknudajgeidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045734.6949983-331-167250137045572/AnsiballZ_systemd_service.py'
Feb 02 15:22:15 compute-0 sudo[221401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:15 compute-0 python3.9[221403]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:15 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 02 15:22:15 compute-0 udevadm[221408]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 02 15:22:15 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 02 15:22:15 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 02 15:22:15 compute-0 multipathd[221411]: --------start up--------
Feb 02 15:22:15 compute-0 multipathd[221411]: read /etc/multipath.conf
Feb 02 15:22:15 compute-0 multipathd[221411]: path checkers start up
Feb 02 15:22:15 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 02 15:22:15 compute-0 sudo[221401]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:16 compute-0 sudo[221568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibyfbwtrqplwcgbslfoafidgnusxqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045735.8330245-343-7482551670144/AnsiballZ_file.py'
Feb 02 15:22:16 compute-0 sudo[221568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:16 compute-0 python3.9[221570]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 02 15:22:16 compute-0 ceph-mon[75334]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:16 compute-0 sudo[221568]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:16 compute-0 sudo[221720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdfqcpcijmdptxgxxbbpgzvzmpsuxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045736.4784894-351-141578267728229/AnsiballZ_modprobe.py'
Feb 02 15:22:16 compute-0 sudo[221720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:16 compute-0 python3.9[221722]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 02 15:22:16 compute-0 kernel: Key type psk registered
Feb 02 15:22:17 compute-0 sudo[221720]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:17 compute-0 sudo[221881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scqpjmuydompwhhmfwffiwnsntwgsjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045737.2976277-359-41113514336402/AnsiballZ_stat.py'
Feb 02 15:22:17 compute-0 sudo[221881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:17 compute-0 python3.9[221883]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:22:17 compute-0 sudo[221881]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:18 compute-0 sudo[222004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzxvgqidwxhojmgnemjsjfjeqheemjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045737.2976277-359-41113514336402/AnsiballZ_copy.py'
Feb 02 15:22:18 compute-0 sudo[222004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:18 compute-0 python3.9[222006]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770045737.2976277-359-41113514336402/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:18 compute-0 sudo[222004]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:18 compute-0 ceph-mon[75334]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:18 compute-0 sudo[222156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogoyvcizrytcmepyqauwwfcwgqpklzwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045738.417211-375-203342444892885/AnsiballZ_lineinfile.py'
Feb 02 15:22:18 compute-0 sudo[222156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:18 compute-0 python3.9[222158]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:18 compute-0 sudo[222156]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:19 compute-0 sudo[222308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijxbktfshueyifimlqurzfddnyxisbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045738.9502535-383-194387507204681/AnsiballZ_systemd.py'
Feb 02 15:22:19 compute-0 sudo[222308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:19 compute-0 python3.9[222310]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:22:19 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 02 15:22:19 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 02 15:22:19 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 02 15:22:19 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 02 15:22:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 02 15:22:19 compute-0 sudo[222308]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:20 compute-0 sudo[222464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjrnutfrhaptmfvxmodypegcjblcdlsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045739.7890868-391-44823776047616/AnsiballZ_dnf.py'
Feb 02 15:22:20 compute-0 sudo[222464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:20 compute-0 ceph-mon[75334]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:20 compute-0 python3.9[222466]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 02 15:22:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:22 compute-0 systemd[1]: Reloading.
Feb 02 15:22:22 compute-0 ceph-mon[75334]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:22 compute-0 systemd-rc-local-generator[222497]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:22:22 compute-0 systemd-sysv-generator[222503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:22:22 compute-0 systemd[1]: Reloading.
Feb 02 15:22:22 compute-0 systemd-sysv-generator[222535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:22:22 compute-0 systemd-rc-local-generator[222532]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:22:22 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 02 15:22:23 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 02 15:22:23 compute-0 lvm[222581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:22:23 compute-0 lvm[222582]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:22:23 compute-0 lvm[222582]: VG ceph_vg1 finished
Feb 02 15:22:23 compute-0 lvm[222581]: VG ceph_vg0 finished
Feb 02 15:22:23 compute-0 lvm[222583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:22:23 compute-0 lvm[222583]: VG ceph_vg2 finished
Feb 02 15:22:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 02 15:22:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 02 15:22:23 compute-0 systemd[1]: Reloading.
Feb 02 15:22:23 compute-0 systemd-rc-local-generator[222630]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:22:23 compute-0 systemd-sysv-generator[222635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:22:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 02 15:22:24 compute-0 sudo[222464]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 02 15:22:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 02 15:22:24 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.058s CPU time.
Feb 02 15:22:24 compute-0 systemd[1]: run-r47549f38e65d4794af495865bca2235e.service: Deactivated successfully.
Feb 02 15:22:24 compute-0 ceph-mon[75334]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:24 compute-0 sudo[223936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpagoruxvohzvjyvkbwolefvcuhjwebc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045744.1827943-399-78715937806486/AnsiballZ_systemd_service.py'
Feb 02 15:22:24 compute-0 sudo[223936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:24 compute-0 python3.9[223938]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:22:24 compute-0 systemd[1]: Stopping Open-iSCSI...
Feb 02 15:22:24 compute-0 iscsid[217439]: iscsid shutting down.
Feb 02 15:22:24 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Feb 02 15:22:24 compute-0 systemd[1]: Stopped Open-iSCSI.
Feb 02 15:22:24 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 02 15:22:24 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 02 15:22:24 compute-0 systemd[1]: Started Open-iSCSI.
Feb 02 15:22:24 compute-0 sudo[223936]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:25 compute-0 sudo[224093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcxpuygqqjjqnfmybvxyfivzwlpsrkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045744.941627-407-234912378412251/AnsiballZ_systemd_service.py'
Feb 02 15:22:25 compute-0 sudo[224093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:25 compute-0 python3.9[224095]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:22:25 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 02 15:22:25 compute-0 multipathd[221411]: exit (signal)
Feb 02 15:22:25 compute-0 multipathd[221411]: --------shut down-------
Feb 02 15:22:25 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Feb 02 15:22:25 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 02 15:22:25 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 02 15:22:25 compute-0 multipathd[224101]: --------start up--------
Feb 02 15:22:25 compute-0 multipathd[224101]: read /etc/multipath.conf
Feb 02 15:22:25 compute-0 multipathd[224101]: path checkers start up
Feb 02 15:22:25 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 02 15:22:25 compute-0 sudo[224093]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:26 compute-0 python3.9[224258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 02 15:22:26 compute-0 ceph-mon[75334]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:26 compute-0 sudo[224412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqwiwimlzbotxkubbvbzmsjoddpwafk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045746.72871-425-277891581801305/AnsiballZ_file.py'
Feb 02 15:22:26 compute-0 sudo[224412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:27 compute-0 python3.9[224414]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:27 compute-0 sudo[224412]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:27 compute-0 sudo[224564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtvxetqxtijtxdcbfsaraccovenzxqux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045747.489267-436-70048518466362/AnsiballZ_systemd_service.py'
Feb 02 15:22:27 compute-0 sudo[224564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:28 compute-0 python3.9[224566]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:22:28 compute-0 systemd[1]: Reloading.
Feb 02 15:22:28 compute-0 systemd-rc-local-generator[224587]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:22:28 compute-0 systemd-sysv-generator[224593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:22:28 compute-0 ceph-mon[75334]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:28 compute-0 sudo[224564]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:28 compute-0 python3.9[224750]: ansible-ansible.builtin.service_facts Invoked
Feb 02 15:22:29 compute-0 network[224767]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 02 15:22:29 compute-0 network[224768]: 'network-scripts' will be removed from distribution in near future.
Feb 02 15:22:29 compute-0 network[224769]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 02 15:22:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:30 compute-0 ceph-mon[75334]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:30 compute-0 sudo[224830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:22:30 compute-0 sudo[224830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:30 compute-0 sudo[224830]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:30 compute-0 sudo[224859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:22:30 compute-0 sudo[224859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:31 compute-0 sudo[224859]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:22:31 compute-0 sudo[224915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:22:31 compute-0 sudo[224915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:31 compute-0 sudo[224915]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:31 compute-0 sudo[224940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:22:31 compute-0 sudo[224940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:22:31 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.674600968 +0000 UTC m=+0.055336291 container create 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:22:31 compute-0 systemd[1]: Started libpod-conmon-8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7.scope.
Feb 02 15:22:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.736491458 +0000 UTC m=+0.117226761 container init 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.745787964 +0000 UTC m=+0.126523287 container start 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.652960053 +0000 UTC m=+0.033695396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.748995282 +0000 UTC m=+0.129730595 container attach 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:22:31 compute-0 elegant_faraday[224997]: 167 167
Feb 02 15:22:31 compute-0 systemd[1]: libpod-8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7.scope: Deactivated successfully.
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.750981907 +0000 UTC m=+0.131717200 container died 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f52fdddc97246b237cc929af1d92c371ccee25e0891a14ea104adad738f3a9-merged.mount: Deactivated successfully.
Feb 02 15:22:31 compute-0 podman[224978]: 2026-02-02 15:22:31.796583049 +0000 UTC m=+0.177318382 container remove 8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:22:31 compute-0 systemd[1]: libpod-conmon-8dc9999f3c59cd2d4585f01349a31a119bb80bddb49b99649416ef7c24d075c7.scope: Deactivated successfully.
Feb 02 15:22:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:31 compute-0 podman[225031]: 2026-02-02 15:22:31.945458099 +0000 UTC m=+0.057909412 container create 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:22:31 compute-0 systemd[1]: Started libpod-conmon-2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b.scope.
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:31.917644696 +0000 UTC m=+0.030096099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:32.048662714 +0000 UTC m=+0.161114067 container init 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:32.063483922 +0000 UTC m=+0.175935265 container start 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:32.068296183 +0000 UTC m=+0.180747526 container attach 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:22:32 compute-0 ceph-mon[75334]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:32 compute-0 podman[225083]: 2026-02-02 15:22:32.427607074 +0000 UTC m=+0.070688353 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 02 15:22:32 compute-0 podman[225080]: 2026-02-02 15:22:32.45838076 +0000 UTC m=+0.101309985 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:22:32 compute-0 peaceful_nash[225051]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:22:32 compute-0 peaceful_nash[225051]: --> All data devices are unavailable
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:32.548556236 +0000 UTC m=+0.661007549 container died 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:22:32 compute-0 systemd[1]: libpod-2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b.scope: Deactivated successfully.
Feb 02 15:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b035eca73eaf897804256a7360b4d52791bff78df215e43c6b9d162fadf9b28c-merged.mount: Deactivated successfully.
Feb 02 15:22:32 compute-0 podman[225031]: 2026-02-02 15:22:32.59709948 +0000 UTC m=+0.709550783 container remove 2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_nash, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 02 15:22:32 compute-0 systemd[1]: libpod-conmon-2dc952a89b4bce53a9e8f7a099a3b08e0c4e99f9d86185853ee72e4d0755555b.scope: Deactivated successfully.
Feb 02 15:22:32 compute-0 sudo[224940]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:32 compute-0 sudo[225163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:22:32 compute-0 sudo[225163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:32 compute-0 sudo[225163]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:32 compute-0 sudo[225189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:22:32 compute-0 sudo[225189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.000690137 +0000 UTC m=+0.059442163 container create 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:22:33 compute-0 systemd[1]: Started libpod-conmon-8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd.scope.
Feb 02 15:22:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:32.977399977 +0000 UTC m=+0.036152013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.082944787 +0000 UTC m=+0.141696803 container init 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.088801908 +0000 UTC m=+0.147553864 container start 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:22:33 compute-0 systemd[1]: libpod-8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd.scope: Deactivated successfully.
Feb 02 15:22:33 compute-0 jolly_jepsen[225341]: 167 167
Feb 02 15:22:33 compute-0 conmon[225341]: conmon 8bb9b16a3b3439b88aac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd.scope/container/memory.events
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.095560154 +0000 UTC m=+0.154312180 container attach 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.096443407 +0000 UTC m=+0.155195353 container died 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c020e480e94465d27e419296d3a001ceede5811b0ba1cba91a9fd701ca5d87a-merged.mount: Deactivated successfully.
Feb 02 15:22:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:33 compute-0 podman[225272]: 2026-02-02 15:22:33.132656572 +0000 UTC m=+0.191408518 container remove 8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:22:33 compute-0 systemd[1]: libpod-conmon-8bb9b16a3b3439b88aac3536f6c2d9d2f2cf3e9c1fd5d7b92fa6b5970ee1a1dd.scope: Deactivated successfully.
Feb 02 15:22:33 compute-0 sudo[225409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyzdkqmmdktxnudmclgswfgaggesfkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045752.9207604-455-25786122248151/AnsiballZ_systemd_service.py'
Feb 02 15:22:33 compute-0 sudo[225409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.28800894 +0000 UTC m=+0.053734417 container create 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:22:33 compute-0 systemd[1]: Started libpod-conmon-2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c.scope.
Feb 02 15:22:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a5d43251205e2c856982d984cf29319d7785d22c2c45157f5da163ed96605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a5d43251205e2c856982d984cf29319d7785d22c2c45157f5da163ed96605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a5d43251205e2c856982d984cf29319d7785d22c2c45157f5da163ed96605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7a5d43251205e2c856982d984cf29319d7785d22c2c45157f5da163ed96605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.264084953 +0000 UTC m=+0.029810410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.378147097 +0000 UTC m=+0.143872584 container init 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.384506501 +0000 UTC m=+0.150231978 container start 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.389264602 +0000 UTC m=+0.154990089 container attach 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:22:33 compute-0 python3.9[225411]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:33 compute-0 sudo[225409]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:33 compute-0 interesting_merkle[225434]: {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     "0": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "devices": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "/dev/loop3"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             ],
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_name": "ceph_lv0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_size": "21470642176",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "name": "ceph_lv0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "tags": {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_name": "ceph",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.crush_device_class": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.encrypted": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.objectstore": "bluestore",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_id": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.vdo": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.with_tpm": "0"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             },
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "vg_name": "ceph_vg0"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         }
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     ],
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     "1": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "devices": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "/dev/loop4"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             ],
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_name": "ceph_lv1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_size": "21470642176",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "name": "ceph_lv1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "tags": {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_name": "ceph",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.crush_device_class": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.encrypted": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.objectstore": "bluestore",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_id": "1",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.vdo": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.with_tpm": "0"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             },
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "vg_name": "ceph_vg1"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         }
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     ],
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     "2": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "devices": [
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "/dev/loop5"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             ],
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_name": "ceph_lv2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_size": "21470642176",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "name": "ceph_lv2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "tags": {
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.cluster_name": "ceph",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.crush_device_class": "",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.encrypted": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.objectstore": "bluestore",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osd_id": "2",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.vdo": "0",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:                 "ceph.with_tpm": "0"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             },
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "type": "block",
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:             "vg_name": "ceph_vg2"
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:         }
Feb 02 15:22:33 compute-0 interesting_merkle[225434]:     ]
Feb 02 15:22:33 compute-0 interesting_merkle[225434]: }
Feb 02 15:22:33 compute-0 systemd[1]: libpod-2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c.scope: Deactivated successfully.
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.725675443 +0000 UTC m=+0.491400880 container died 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7a5d43251205e2c856982d984cf29319d7785d22c2c45157f5da163ed96605-merged.mount: Deactivated successfully.
Feb 02 15:22:33 compute-0 podman[225417]: 2026-02-02 15:22:33.765324182 +0000 UTC m=+0.531049639 container remove 2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:22:33 compute-0 systemd[1]: libpod-conmon-2064d0b23638c34d7b042c3a5f065d37738c9a205f14ba1bdcbd4b523daa7b6c.scope: Deactivated successfully.
Feb 02 15:22:33 compute-0 sudo[225189]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:33 compute-0 sudo[225580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:22:33 compute-0 sudo[225580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:33 compute-0 sudo[225580]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:33 compute-0 sudo[225630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnoyigqrglztdvrhazosfwxvgtjrfxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045753.6174736-455-18464210745916/AnsiballZ_systemd_service.py'
Feb 02 15:22:33 compute-0 sudo[225630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:33 compute-0 sudo[225634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:22:33 compute-0 sudo[225634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:34 compute-0 python3.9[225633]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:34 compute-0 sudo[225630]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.177582178 +0000 UTC m=+0.049671696 container create f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:22:34 compute-0 systemd[1]: Started libpod-conmon-f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee.scope.
Feb 02 15:22:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.246974124 +0000 UTC m=+0.119063632 container init f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.157468655 +0000 UTC m=+0.029558193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.253015029 +0000 UTC m=+0.125104527 container start f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.256346101 +0000 UTC m=+0.128435599 container attach f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:22:34 compute-0 crazy_lehmann[225703]: 167 167
Feb 02 15:22:34 compute-0 systemd[1]: libpod-f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee.scope: Deactivated successfully.
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.258875671 +0000 UTC m=+0.130965169 container died f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ef59b2ec1050612f4c83e46808c425132d85f1d8757027fe1e80fd24c6dc4f0-merged.mount: Deactivated successfully.
Feb 02 15:22:34 compute-0 podman[225672]: 2026-02-02 15:22:34.29379316 +0000 UTC m=+0.165882648 container remove f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:22:34 compute-0 systemd[1]: libpod-conmon-f4cf49acb9a8088b458d574366034aae55e78294cad9dd61bf94a7055519e2ee.scope: Deactivated successfully.
Feb 02 15:22:34 compute-0 ceph-mon[75334]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:34 compute-0 podman[225789]: 2026-02-02 15:22:34.449291252 +0000 UTC m=+0.045431960 container create 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:22:34 compute-0 systemd[1]: Started libpod-conmon-1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998.scope.
Feb 02 15:22:34 compute-0 podman[225789]: 2026-02-02 15:22:34.430736361 +0000 UTC m=+0.026877149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:22:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bfe3fa063b9ac6dbac70218f71770fea0d673561372f2e828c789dc10ec3de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bfe3fa063b9ac6dbac70218f71770fea0d673561372f2e828c789dc10ec3de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bfe3fa063b9ac6dbac70218f71770fea0d673561372f2e828c789dc10ec3de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bfe3fa063b9ac6dbac70218f71770fea0d673561372f2e828c789dc10ec3de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:22:34 compute-0 podman[225789]: 2026-02-02 15:22:34.566865692 +0000 UTC m=+0.163006420 container init 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:22:34 compute-0 podman[225789]: 2026-02-02 15:22:34.571961121 +0000 UTC m=+0.168101829 container start 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:22:34 compute-0 podman[225789]: 2026-02-02 15:22:34.574850301 +0000 UTC m=+0.170991009 container attach 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:22:34 compute-0 sudo[225883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyisotpnhpvhmdquljcbivqdzsdivzwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045754.303184-455-24768776579917/AnsiballZ_systemd_service.py'
Feb 02 15:22:34 compute-0 sudo[225883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:34 compute-0 python3.9[225886]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:34 compute-0 sudo[225883]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:35 compute-0 lvm[226085]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:22:35 compute-0 lvm[226085]: VG ceph_vg1 finished
Feb 02 15:22:35 compute-0 lvm[226084]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:22:35 compute-0 lvm[226084]: VG ceph_vg0 finished
Feb 02 15:22:35 compute-0 lvm[226088]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:22:35 compute-0 lvm[226088]: VG ceph_vg2 finished
Feb 02 15:22:35 compute-0 sudo[226114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpogmfccdpscczicqfsfggtrazrwvzjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045754.9912307-455-186216087753562/AnsiballZ_systemd_service.py'
Feb 02 15:22:35 compute-0 sudo[226114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:35 compute-0 vibrant_chatterjee[225830]: {}
Feb 02 15:22:35 compute-0 systemd[1]: libpod-1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998.scope: Deactivated successfully.
Feb 02 15:22:35 compute-0 systemd[1]: libpod-1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998.scope: Consumed 1.071s CPU time.
Feb 02 15:22:35 compute-0 podman[225789]: 2026-02-02 15:22:35.277288108 +0000 UTC m=+0.873428836 container died 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:22:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5bfe3fa063b9ac6dbac70218f71770fea0d673561372f2e828c789dc10ec3de-merged.mount: Deactivated successfully.
Feb 02 15:22:35 compute-0 podman[225789]: 2026-02-02 15:22:35.323103846 +0000 UTC m=+0.919244564 container remove 1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:22:35 compute-0 systemd[1]: libpod-conmon-1e03a5cb08b2b26f11b6fdabd7c0385929c323f88c9d09311eb2f11c12c83998.scope: Deactivated successfully.
Feb 02 15:22:35 compute-0 sudo[225634]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:22:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:22:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:35 compute-0 sudo[226131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:22:35 compute-0 sudo[226131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:22:35 compute-0 sudo[226131]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:35 compute-0 python3.9[226117]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:35 compute-0 sudo[226114]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:35 compute-0 sudo[226306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlfsjldbievdnoqualjauaxmpoipevsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045755.6764643-455-157923019717324/AnsiballZ_systemd_service.py'
Feb 02 15:22:35 compute-0 sudo[226306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:36 compute-0 python3.9[226308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:36 compute-0 sudo[226306]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:36 compute-0 ceph-mon[75334]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:22:36 compute-0 sudo[226459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjdudhdclbbrczhncdqeruvebbdyxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045756.4754705-455-70239551809213/AnsiballZ_systemd_service.py'
Feb 02 15:22:36 compute-0 sudo[226459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:37 compute-0 python3.9[226461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:37 compute-0 sudo[226459]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:37 compute-0 sudo[226612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizazkpoifyjgzekafjzqttfoydkhgjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045757.1962032-455-98951078832985/AnsiballZ_systemd_service.py'
Feb 02 15:22:37 compute-0 sudo[226612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:37 compute-0 python3.9[226614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:37 compute-0 sudo[226612]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:38 compute-0 sudo[226765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjijklzuoqlbopgupotfnyahbftaisyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045757.924881-455-4094003194160/AnsiballZ_systemd_service.py'
Feb 02 15:22:38 compute-0 sudo[226765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:38 compute-0 ceph-mon[75334]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:38 compute-0 python3.9[226767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:22:38 compute-0 sudo[226765]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:39 compute-0 sudo[226918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfrychzdxpfztfgsksuzvmgijprawdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045758.8040645-514-191779618561499/AnsiballZ_file.py'
Feb 02 15:22:39 compute-0 sudo[226918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:39 compute-0 python3.9[226920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:39 compute-0 sudo[226918]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:39 compute-0 sudo[227070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxxcynrlwrkqtdziztbmuyguxlxoujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045759.3630018-514-70058819928782/AnsiballZ_file.py'
Feb 02 15:22:39 compute-0 sudo[227070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:39 compute-0 python3.9[227072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:39 compute-0 sudo[227070]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:40 compute-0 sudo[227222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxtnvxkzezzjtqtsgbnjfyuemdlcmkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045759.9591162-514-36049075085774/AnsiballZ_file.py'
Feb 02 15:22:40 compute-0 sudo[227222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:40 compute-0 python3.9[227224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:40 compute-0 sudo[227222]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:40 compute-0 ceph-mon[75334]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:40 compute-0 sudo[227374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohnfpirkxdowflsqwdocozsjcakqttzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045760.5209837-514-264512620101598/AnsiballZ_file.py'
Feb 02 15:22:40 compute-0 sudo[227374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:41 compute-0 python3.9[227376]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:41 compute-0 sudo[227374]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:41 compute-0 sudo[227526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soznuebggviagofmqtidhwhmiidmjmtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045761.162779-514-19249630068843/AnsiballZ_file.py'
Feb 02 15:22:41 compute-0 sudo[227526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:41 compute-0 python3.9[227528]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:41 compute-0 sudo[227526]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:41 compute-0 sudo[227678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhxxiawdbuwyzrxgttwnxpvltykamkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045761.710711-514-206243619277315/AnsiballZ_file.py'
Feb 02 15:22:41 compute-0 sudo[227678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:42 compute-0 python3.9[227680]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:42 compute-0 sudo[227678]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:42 compute-0 ceph-mon[75334]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:42 compute-0 sudo[227830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jayahpexdseifgxwqseuazxsorvdchii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045762.2631557-514-185587258414097/AnsiballZ_file.py'
Feb 02 15:22:42 compute-0 sudo[227830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:42 compute-0 python3.9[227832]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:42 compute-0 sudo[227830]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:22:42
Feb 02 15:22:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:22:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:22:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Feb 02 15:22:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:22:43 compute-0 sudo[227982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdrfqprpznpacsctgaaevowlvoewprq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045762.8330238-514-21499148248413/AnsiballZ_file.py'
Feb 02 15:22:43 compute-0 sudo[227982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:43 compute-0 python3.9[227984]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:43 compute-0 sudo[227982]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:43 compute-0 sudo[228134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gavxqmihgaylwrezliqvxdawpzuwuybx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045763.5233786-571-110481760317271/AnsiballZ_file.py'
Feb 02 15:22:43 compute-0 sudo[228134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:43 compute-0 python3.9[228136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:44 compute-0 sudo[228134]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:44 compute-0 sudo[228286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrhckvunkgebfznsdxmqkhvjpjjexup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045764.137567-571-269217701279157/AnsiballZ_file.py'
Feb 02 15:22:44 compute-0 sudo[228286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:44 compute-0 ceph-mon[75334]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:44 compute-0 python3.9[228288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:44 compute-0 sudo[228286]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:22:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:22:44 compute-0 sudo[228438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ongkvuszfqneewzavxvsfrpfmbljtqke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045764.7385232-571-14116002102649/AnsiballZ_file.py'
Feb 02 15:22:44 compute-0 sudo[228438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:45 compute-0 python3.9[228440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:45 compute-0 sudo[228438]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:45 compute-0 sudo[228590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvxdepxqpqkcipghlnqscwmbvbyxilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045765.320654-571-192718333673381/AnsiballZ_file.py'
Feb 02 15:22:45 compute-0 sudo[228590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:46 compute-0 python3.9[228592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:46 compute-0 sudo[228590]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:46 compute-0 ceph-mon[75334]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:46 compute-0 sudo[228742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxxwysrzzqygrhauvjgetvldgqtzwbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045766.26538-571-225418389593717/AnsiballZ_file.py'
Feb 02 15:22:46 compute-0 sudo[228742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:46 compute-0 python3.9[228744]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:46 compute-0 sudo[228742]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:46 compute-0 sudo[228894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isafkyreqxemgxnnqjdokikkywfjvbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045766.7434125-571-207593292470645/AnsiballZ_file.py'
Feb 02 15:22:46 compute-0 sudo[228894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:47 compute-0 python3.9[228896]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:47 compute-0 sudo[228894]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:47 compute-0 sudo[229046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idrimjrinhwzbglvtuiezsvnyawaffmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045767.367568-571-100717357741217/AnsiballZ_file.py'
Feb 02 15:22:47 compute-0 sudo[229046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:47 compute-0 python3.9[229048]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:47 compute-0 sudo[229046]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:47 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 02 15:22:48 compute-0 sudo[229199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opybihawogwfdzidqmxuuopqlxbsimyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045768.0325882-571-228304942308966/AnsiballZ_file.py'
Feb 02 15:22:48 compute-0 sudo[229199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:48 compute-0 python3.9[229201]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:22:48 compute-0 ceph-mon[75334]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:48 compute-0 sudo[229199]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:48 compute-0 sudo[229351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgpmcflmlcciwsmgeahselsblpsylkrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045768.6852322-629-248273102472385/AnsiballZ_command.py'
Feb 02 15:22:48 compute-0 sudo[229351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 02 15:22:49 compute-0 python3.9[229353]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:49 compute-0 sudo[229351]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:49 compute-0 python3.9[229506]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 02 15:22:50 compute-0 sudo[229656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghivxmiodbskyasjnwdajdtamqhwojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045770.1551194-647-122616074570582/AnsiballZ_systemd_service.py'
Feb 02 15:22:50 compute-0 sudo[229656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:50 compute-0 ceph-mon[75334]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:50 compute-0 python3.9[229658]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:22:50 compute-0 systemd[1]: Reloading.
Feb 02 15:22:50 compute-0 systemd-rc-local-generator[229686]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:22:50 compute-0 systemd-sysv-generator[229689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:22:51 compute-0 sudo[229656]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:51 compute-0 sudo[229843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavhixcbgbmzfqaaqqyrnlisycaqbydq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045771.2276783-655-117526898285876/AnsiballZ_command.py'
Feb 02 15:22:51 compute-0 sudo[229843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:51 compute-0 python3.9[229845]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:51 compute-0 sudo[229843]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:52 compute-0 sudo[229996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmkibmblbrjpyujsuiwqzzjkibifctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045771.8197124-655-179753005474780/AnsiballZ_command.py'
Feb 02 15:22:52 compute-0 sudo[229996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:52 compute-0 python3.9[229998]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:52 compute-0 sudo[229996]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:52 compute-0 ceph-mon[75334]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:52 compute-0 sudo[230149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziqzoucvjtxzqwwrzjafnryjmyzdvrht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045772.4050372-655-10676561342773/AnsiballZ_command.py'
Feb 02 15:22:52 compute-0 sudo[230149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:52 compute-0 python3.9[230151]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:52 compute-0 sudo[230149]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:53 compute-0 sudo[230302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwzpixnugewfqseomvdiwlsoaqwhvur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045773.0000484-655-137919405740272/AnsiballZ_command.py'
Feb 02 15:22:53 compute-0 sudo[230302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:53 compute-0 python3.9[230304]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:53 compute-0 sudo[230302]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:53 compute-0 sudo[230455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavqboxjpmkqpphocwxkboozmmqwdzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045773.6520238-655-109437323518888/AnsiballZ_command.py'
Feb 02 15:22:53 compute-0 sudo[230455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:54 compute-0 python3.9[230457]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:54 compute-0 sudo[230455]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:22:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:22:54 compute-0 ceph-mon[75334]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:54 compute-0 sudo[230608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugxdzyvqamspqncdzmxzrypbzbcrded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045774.3061407-655-261371938113828/AnsiballZ_command.py'
Feb 02 15:22:54 compute-0 sudo[230608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:54 compute-0 python3.9[230610]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:54 compute-0 sudo[230608]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:55 compute-0 sudo[230761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbjuclsmuwuaqbxwdpyeutxgxhdyjww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045774.8957248-655-16499053052421/AnsiballZ_command.py'
Feb 02 15:22:55 compute-0 sudo[230761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:55 compute-0 python3.9[230763]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:56 compute-0 sudo[230761]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:56 compute-0 ceph-mon[75334]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:56 compute-0 sudo[230914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycwoorutgrcwripcbmqmytuxmjbzlwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045776.540959-655-46581547904656/AnsiballZ_command.py'
Feb 02 15:22:56 compute-0 sudo[230914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:22:56 compute-0 python3.9[230916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 02 15:22:57 compute-0 sudo[230914]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:58 compute-0 sudo[231067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkthjwojfasmmgfeqbfvzmpxbdnihlat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045778.0240588-734-83020180155265/AnsiballZ_file.py'
Feb 02 15:22:58 compute-0 sudo[231067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:58 compute-0 python3.9[231069]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:22:58 compute-0 sudo[231067]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:58 compute-0 ceph-mon[75334]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:58 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 02 15:22:58 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 02 15:22:58 compute-0 sudo[231221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvyxjjrrcvltmithdiwqhiznxmwjywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045778.6456869-734-272360877968329/AnsiballZ_file.py'
Feb 02 15:22:58 compute-0 sudo[231221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:59 compute-0 python3.9[231223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:22:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:22:59 compute-0 sudo[231221]: pam_unix(sudo:session): session closed for user root
Feb 02 15:22:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:22:59.233 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:22:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:22:59.234 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:22:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:22:59.235 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:22:59 compute-0 sudo[231373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwbwohtcfmqniuekvczwurilasqylds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045779.2869895-734-233204686641413/AnsiballZ_file.py'
Feb 02 15:22:59 compute-0 sudo[231373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:22:59 compute-0 python3.9[231375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:22:59 compute-0 sudo[231373]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:00 compute-0 sudo[231525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffkctdejypoksphkfgrjcvbpufosmmzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045780.0688264-756-6950122848120/AnsiballZ_file.py'
Feb 02 15:23:00 compute-0 sudo[231525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:00 compute-0 python3.9[231527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:00 compute-0 ceph-mon[75334]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:00 compute-0 sudo[231525]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:00 compute-0 sudo[231677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrieropmiyujcmpfaekmefjtglyoqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045780.7118845-756-45239015816851/AnsiballZ_file.py'
Feb 02 15:23:00 compute-0 sudo[231677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:01 compute-0 python3.9[231679]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:01 compute-0 sudo[231677]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:01 compute-0 sudo[231829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtobqyobtdoqvrbgagwnebevhbdnlixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045781.3315208-756-229450458164257/AnsiballZ_file.py'
Feb 02 15:23:01 compute-0 sudo[231829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:01 compute-0 anacron[35674]: Job `cron.daily' started
Feb 02 15:23:01 compute-0 anacron[35674]: Job `cron.daily' terminated
Feb 02 15:23:01 compute-0 python3.9[231831]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:01 compute-0 sudo[231829]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:02 compute-0 sudo[231983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfuaglcuntfokzorygqzvzvcdqlcysoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045781.9762778-756-7268197176365/AnsiballZ_file.py'
Feb 02 15:23:02 compute-0 sudo[231983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:02 compute-0 python3.9[231985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:02 compute-0 sudo[231983]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:02 compute-0 ceph-mon[75334]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:02 compute-0 sudo[232162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfeeonplmndtvxvghyoecznafqppgpoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045782.5404272-756-107378011457696/AnsiballZ_file.py'
Feb 02 15:23:02 compute-0 sudo[232162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:02 compute-0 podman[232109]: 2026-02-02 15:23:02.84756928 +0000 UTC m=+0.088978076 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 02 15:23:02 compute-0 podman[232110]: 2026-02-02 15:23:02.847693433 +0000 UTC m=+0.089291484 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 02 15:23:02 compute-0 python3.9[232173]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:03 compute-0 sudo[232162]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:03 compute-0 sudo[232329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcbzkzckryfxmajodspwqfngsvtzgusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045783.1411757-756-150640963609270/AnsiballZ_file.py'
Feb 02 15:23:03 compute-0 sudo[232329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:03 compute-0 python3.9[232331]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:03 compute-0 sudo[232329]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:04 compute-0 sudo[232481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewexiwhfvkeacoofypelqlqgwgtiwsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045783.7137508-756-172725261570932/AnsiballZ_file.py'
Feb 02 15:23:04 compute-0 sudo[232481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:04 compute-0 python3.9[232483]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:04 compute-0 sudo[232481]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:04 compute-0 ceph-mon[75334]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:06 compute-0 ceph-mon[75334]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:08 compute-0 ceph-mon[75334]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:09 compute-0 sudo[232633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevtxzuarxwzkpluuzroszctbeasrsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045789.0334737-945-59383899726341/AnsiballZ_getent.py'
Feb 02 15:23:09 compute-0 sudo[232633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:09 compute-0 python3.9[232635]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 02 15:23:09 compute-0 sudo[232633]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:10 compute-0 sudo[232786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxxlnbjzkkomsawnryiidmxvljitxoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045789.8976138-953-91930426238344/AnsiballZ_group.py'
Feb 02 15:23:10 compute-0 sudo[232786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:10 compute-0 python3.9[232788]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 02 15:23:10 compute-0 groupadd[232789]: group added to /etc/group: name=nova, GID=42436
Feb 02 15:23:10 compute-0 groupadd[232789]: group added to /etc/gshadow: name=nova
Feb 02 15:23:10 compute-0 groupadd[232789]: new group: name=nova, GID=42436
Feb 02 15:23:10 compute-0 sudo[232786]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:10 compute-0 ceph-mon[75334]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:11 compute-0 sudo[232944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxilmpwjdbiqdmaufxureiveqpgrrdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045790.7352357-961-191746676043272/AnsiballZ_user.py'
Feb 02 15:23:11 compute-0 sudo[232944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:11 compute-0 python3.9[232946]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 02 15:23:11 compute-0 useradd[232948]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Feb 02 15:23:11 compute-0 useradd[232948]: add 'nova' to group 'libvirt'
Feb 02 15:23:11 compute-0 useradd[232948]: add 'nova' to shadow group 'libvirt'
Feb 02 15:23:11 compute-0 sudo[232944]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:12 compute-0 sshd-session[232979]: Accepted publickey for zuul from 192.168.122.30 port 56832 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 15:23:12 compute-0 systemd-logind[786]: New session 51 of user zuul.
Feb 02 15:23:12 compute-0 systemd[1]: Started Session 51 of User zuul.
Feb 02 15:23:12 compute-0 sshd-session[232979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 15:23:12 compute-0 sshd-session[232982]: Received disconnect from 192.168.122.30 port 56832:11: disconnected by user
Feb 02 15:23:12 compute-0 sshd-session[232982]: Disconnected from user zuul 192.168.122.30 port 56832
Feb 02 15:23:12 compute-0 sshd-session[232979]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:23:12 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Feb 02 15:23:12 compute-0 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Feb 02 15:23:12 compute-0 systemd-logind[786]: Removed session 51.
Feb 02 15:23:12 compute-0 ceph-mon[75334]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:13 compute-0 python3.9[233132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:13 compute-0 python3.9[233253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045792.7259398-986-67282094263934/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:14 compute-0 python3.9[233403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:14 compute-0 python3.9[233479]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:14 compute-0 ceph-mon[75334]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:14 compute-0 python3.9[233629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:15 compute-0 python3.9[233750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045794.606193-986-98791683793127/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:16 compute-0 python3.9[233900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:16 compute-0 python3.9[234021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045795.5936637-986-136174867110964/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:16 compute-0 ceph-mon[75334]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:17 compute-0 python3.9[234171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:17 compute-0 python3.9[234292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045796.655609-986-134239804501025/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:18 compute-0 python3.9[234442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:18 compute-0 ceph-mon[75334]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:18 compute-0 python3.9[234563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045797.896566-986-85221042426641/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:19 compute-0 sudo[234713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjqbupmyjundrdnqdstudpsmrentnpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045798.970296-1069-2188119336785/AnsiballZ_file.py'
Feb 02 15:23:19 compute-0 sudo[234713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:19 compute-0 python3.9[234715]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:23:19 compute-0 sudo[234713]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:19 compute-0 sudo[234865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfnxyadobamvdzzmsxmangjtgnlnpphz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045799.503529-1077-137802215411787/AnsiballZ_copy.py'
Feb 02 15:23:19 compute-0 sudo[234865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:19 compute-0 python3.9[234867]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:23:19 compute-0 sudo[234865]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:20 compute-0 sudo[235017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvaxlnwwnihonvxdgrdvceaetptnbxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045800.1663313-1085-273787024320655/AnsiballZ_stat.py'
Feb 02 15:23:20 compute-0 sudo[235017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:20 compute-0 python3.9[235019]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:20 compute-0 sudo[235017]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:20 compute-0 ceph-mon[75334]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:21 compute-0 sudo[235169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzsgumorwxwyrhcsegquxjypurwvkqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045800.8117557-1093-164927429934191/AnsiballZ_stat.py'
Feb 02 15:23:21 compute-0 sudo[235169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:21 compute-0 python3.9[235171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:21 compute-0 sudo[235169]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:21 compute-0 sudo[235292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtolmaubhsdygpbpyoabhynzkfeuadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045800.8117557-1093-164927429934191/AnsiballZ_copy.py'
Feb 02 15:23:21 compute-0 sudo[235292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:21 compute-0 python3.9[235294]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1770045800.8117557-1093-164927429934191/.source _original_basename=.4kjp4l_u follow=False checksum=4f3e3fa551066bbfa3c7cc5495d12bd31390608a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 02 15:23:21 compute-0 sudo[235292]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:22 compute-0 python3.9[235446]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:22 compute-0 ceph-mon[75334]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:23 compute-0 python3.9[235598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Feb 02 15:23:23 compute-0 python3.9[235719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045802.662267-1119-37613653243337/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:24 compute-0 python3.9[235869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 02 15:23:24 compute-0 ceph-mon[75334]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Feb 02 15:23:24 compute-0 python3.9[235990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770045803.8374727-1134-215920860512931/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 02 15:23:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 15:23:25 compute-0 sudo[236140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscqbxnpacksvxgcldbgkhkvqcpyxwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045805.0285316-1151-226875438387509/AnsiballZ_container_config_data.py'
Feb 02 15:23:25 compute-0 sudo[236140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:25 compute-0 python3.9[236142]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb 02 15:23:25 compute-0 sudo[236140]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:26 compute-0 sudo[236292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vystmgrfqqfaepgnghraxgunteejkdfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045805.9616556-1162-154667651750218/AnsiballZ_container_config_hash.py'
Feb 02 15:23:26 compute-0 sudo[236292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:26 compute-0 python3.9[236294]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 15:23:26 compute-0 sudo[236292]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:26 compute-0 ceph-mon[75334]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 02 15:23:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:27 compute-0 sudo[236444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plwqnscrcrafpjmjoycesjmoyuoqkvtf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045806.8701613-1172-232561597396765/AnsiballZ_edpm_container_manage.py'
Feb 02 15:23:27 compute-0 sudo[236444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:27 compute-0 python3[236446]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 15:23:28 compute-0 ceph-mon[75334]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:30 compute-0 ceph-mon[75334]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:32 compute-0 ceph-mon[75334]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:35 compute-0 ceph-mon[75334]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:23:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:23:35 compute-0 sudo[236548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:23:35 compute-0 sudo[236548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:35 compute-0 sudo[236548]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:35 compute-0 sudo[236573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:23:35 compute-0 sudo[236573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:36 compute-0 ceph-mon[75334]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:23:36 compute-0 podman[236527]: 2026-02-02 15:23:36.958491493 +0000 UTC m=+3.688349569 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:23:37 compute-0 podman[236526]: 2026-02-02 15:23:37.000457545 +0000 UTC m=+3.730125840 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 02 15:23:37 compute-0 podman[236460]: 2026-02-02 15:23:37.041240819 +0000 UTC m=+9.510407707 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 15:23:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:23:37 compute-0 sudo[236573]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:37 compute-0 podman[236661]: 2026-02-02 15:23:37.182746703 +0000 UTC m=+0.027558987 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:37 compute-0 podman[236661]: 2026-02-02 15:23:37.306980108 +0000 UTC m=+0.151792332 container create 011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, tcib_managed=true, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:23:37 compute-0 python3[236446]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:23:37 compute-0 sudo[236444]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:37 compute-0 sudo[236705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:23:37 compute-0 sudo[236705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:37 compute-0 sudo[236705]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:37 compute-0 sudo[236738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:23:37 compute-0 sudo[236738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:23:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.841399498 +0000 UTC m=+0.099557003 container create 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.76389748 +0000 UTC m=+0.022054995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:37 compute-0 systemd[1]: Started libpod-conmon-41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902.scope.
Feb 02 15:23:37 compute-0 sudo[236940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaeigjnwfuheonovokbllstgaadflfpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045817.6428285-1180-200098887464213/AnsiballZ_stat.py'
Feb 02 15:23:37 compute-0 sudo[236940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.954097297 +0000 UTC m=+0.212254832 container init 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.9625644 +0000 UTC m=+0.220721915 container start 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.98674007 +0000 UTC m=+0.244897595 container attach 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:23:37 compute-0 epic_elgamal[236941]: 167 167
Feb 02 15:23:37 compute-0 systemd[1]: libpod-41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902.scope: Deactivated successfully.
Feb 02 15:23:37 compute-0 podman[236854]: 2026-02-02 15:23:37.994401377 +0000 UTC m=+0.252558872 container died 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b4609c338b026abe3cdf892f6c1628f705953b2e03b0a7bd10eb78147b6eb8-merged.mount: Deactivated successfully.
Feb 02 15:23:38 compute-0 python3.9[236945]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:38 compute-0 sudo[236940]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:38 compute-0 podman[236854]: 2026-02-02 15:23:38.191278223 +0000 UTC m=+0.449435768 container remove 41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_elgamal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:23:38 compute-0 systemd[1]: libpod-conmon-41742210fc974ea36c2b0c41b1ce6a595c7475c0f55301e6f9fff94c5289d902.scope: Deactivated successfully.
Feb 02 15:23:38 compute-0 podman[236994]: 2026-02-02 15:23:38.317747355 +0000 UTC m=+0.024616994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:38 compute-0 podman[236994]: 2026-02-02 15:23:38.416953074 +0000 UTC m=+0.123822693 container create 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:23:38 compute-0 systemd[1]: Started libpod-conmon-3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5.scope.
Feb 02 15:23:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:38 compute-0 podman[236994]: 2026-02-02 15:23:38.551906519 +0000 UTC m=+0.258776178 container init 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:23:38 compute-0 podman[236994]: 2026-02-02 15:23:38.558332207 +0000 UTC m=+0.265201846 container start 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:23:38 compute-0 podman[236994]: 2026-02-02 15:23:38.587344473 +0000 UTC m=+0.294214102 container attach 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:23:38 compute-0 sudo[237141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvnhnwmezxxbndruvkrpheptklxeflpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045818.5060742-1192-154716100052134/AnsiballZ_container_config_data.py'
Feb 02 15:23:38 compute-0 sudo[237141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:38 compute-0 ceph-mon[75334]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:23:38 compute-0 python3.9[237143]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb 02 15:23:38 compute-0 sudo[237141]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:39 compute-0 gracious_brown[237018]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:23:39 compute-0 gracious_brown[237018]: --> All data devices are unavailable
Feb 02 15:23:39 compute-0 systemd[1]: libpod-3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5.scope: Deactivated successfully.
Feb 02 15:23:39 compute-0 podman[236994]: 2026-02-02 15:23:39.209763338 +0000 UTC m=+0.916632947 container died 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-08259066e6a5db060d6457e3a63577c59844007f96bce83e45bfbd5658c0f856-merged.mount: Deactivated successfully.
Feb 02 15:23:39 compute-0 podman[236994]: 2026-02-02 15:23:39.346004052 +0000 UTC m=+1.052873671 container remove 3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brown, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:23:39 compute-0 systemd[1]: libpod-conmon-3756b95ec8321ac53e18c65b19841e6ee025d67b46b34707c6597b42ce4fdcb5.scope: Deactivated successfully.
Feb 02 15:23:39 compute-0 sudo[236738]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:39 compute-0 sudo[237301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:23:39 compute-0 sudo[237301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:39 compute-0 sudo[237301]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:39 compute-0 sudo[237345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnkiuzftowviwkjzyfbzyxiqwplnriqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045819.226334-1203-223666686115980/AnsiballZ_container_config_hash.py'
Feb 02 15:23:39 compute-0 sudo[237345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:39 compute-0 sudo[237350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:23:39 compute-0 sudo[237350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:39 compute-0 python3.9[237349]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 02 15:23:39 compute-0 sudo[237345]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.741256026 +0000 UTC m=+0.040225130 container create 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:23:39 compute-0 systemd[1]: Started libpod-conmon-9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6.scope.
Feb 02 15:23:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.810638493 +0000 UTC m=+0.109607647 container init 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.815172556 +0000 UTC m=+0.114141660 container start 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.719966017 +0000 UTC m=+0.018935201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:39 compute-0 pedantic_shannon[237428]: 167 167
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.818567632 +0000 UTC m=+0.117536796 container attach 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:23:39 compute-0 systemd[1]: libpod-9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6.scope: Deactivated successfully.
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.819979023 +0000 UTC m=+0.118948137 container died 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-20ac4cde54b1e72143de52f2d2921b7c9018c6da6794b88a323d92f83b963831-merged.mount: Deactivated successfully.
Feb 02 15:23:39 compute-0 podman[237398]: 2026-02-02 15:23:39.849933335 +0000 UTC m=+0.148902439 container remove 9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:23:39 compute-0 systemd[1]: libpod-conmon-9fd2e32dee646c710130eb3fce306179ca8d8845347df89e936c9240b99a3db6.scope: Deactivated successfully.
Feb 02 15:23:39 compute-0 podman[237476]: 2026-02-02 15:23:39.973801018 +0000 UTC m=+0.045542550 container create e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:23:40 compute-0 systemd[1]: Started libpod-conmon-e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9.scope.
Feb 02 15:23:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b78e61c0c33ca6a0ac4a2ae37790e8448635be38567448019486bcf30975914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b78e61c0c33ca6a0ac4a2ae37790e8448635be38567448019486bcf30975914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b78e61c0c33ca6a0ac4a2ae37790e8448635be38567448019486bcf30975914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b78e61c0c33ca6a0ac4a2ae37790e8448635be38567448019486bcf30975914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:40 compute-0 podman[237476]: 2026-02-02 15:23:39.955174609 +0000 UTC m=+0.026916151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:40 compute-0 sudo[237597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwgzuqkhrnmxpqvpygaoremmsxetfnlw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1770045819.913883-1213-59836220295626/AnsiballZ_edpm_container_manage.py'
Feb 02 15:23:40 compute-0 sudo[237597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:40 compute-0 podman[237476]: 2026-02-02 15:23:40.124339769 +0000 UTC m=+0.196081361 container init e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:23:40 compute-0 podman[237476]: 2026-02-02 15:23:40.130551775 +0000 UTC m=+0.202293327 container start e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:23:40 compute-0 podman[237476]: 2026-02-02 15:23:40.133991751 +0000 UTC m=+0.205733273 container attach e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:23:40 compute-0 trusting_tu[237544]: {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     "0": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "devices": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "/dev/loop3"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             ],
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_name": "ceph_lv0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_size": "21470642176",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "name": "ceph_lv0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "tags": {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_name": "ceph",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.crush_device_class": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.encrypted": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.objectstore": "bluestore",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_id": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.vdo": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.with_tpm": "0"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             },
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "vg_name": "ceph_vg0"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         }
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     ],
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     "1": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "devices": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "/dev/loop4"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             ],
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_name": "ceph_lv1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_size": "21470642176",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "name": "ceph_lv1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "tags": {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_name": "ceph",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.crush_device_class": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.encrypted": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.objectstore": "bluestore",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_id": "1",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.vdo": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.with_tpm": "0"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             },
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "vg_name": "ceph_vg1"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         }
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     ],
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     "2": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "devices": [
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "/dev/loop5"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             ],
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_name": "ceph_lv2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_size": "21470642176",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "name": "ceph_lv2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "tags": {
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.cluster_name": "ceph",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.crush_device_class": "",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.encrypted": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.objectstore": "bluestore",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osd_id": "2",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.vdo": "0",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:                 "ceph.with_tpm": "0"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             },
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "type": "block",
Feb 02 15:23:40 compute-0 trusting_tu[237544]:             "vg_name": "ceph_vg2"
Feb 02 15:23:40 compute-0 trusting_tu[237544]:         }
Feb 02 15:23:40 compute-0 trusting_tu[237544]:     ]
Feb 02 15:23:40 compute-0 trusting_tu[237544]: }
Feb 02 15:23:40 compute-0 python3[237599]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 02 15:23:40 compute-0 systemd[1]: libpod-e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9.scope: Deactivated successfully.
Feb 02 15:23:40 compute-0 podman[237613]: 2026-02-02 15:23:40.453933113 +0000 UTC m=+0.032460013 container died e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b78e61c0c33ca6a0ac4a2ae37790e8448635be38567448019486bcf30975914-merged.mount: Deactivated successfully.
Feb 02 15:23:40 compute-0 podman[237613]: 2026-02-02 15:23:40.488328119 +0000 UTC m=+0.066854999 container remove e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:23:40 compute-0 systemd[1]: libpod-conmon-e26d742e75461423834043e0f9424e88fea01dc69febcfccb77b276dcc5e1bb9.scope: Deactivated successfully.
Feb 02 15:23:40 compute-0 sudo[237350]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:40 compute-0 podman[237651]: 2026-02-02 15:23:40.531516271 +0000 UTC m=+0.046422646 container create 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=nova_compute)
Feb 02 15:23:40 compute-0 podman[237651]: 2026-02-02 15:23:40.508689512 +0000 UTC m=+0.023595907 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 02 15:23:40 compute-0 python3[237599]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 02 15:23:40 compute-0 sudo[237664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:23:40 compute-0 sudo[237664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:40 compute-0 sudo[237664]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:40 compute-0 sudo[237703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:23:40 compute-0 sudo[237703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:40 compute-0 sudo[237597]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.858368345 +0000 UTC m=+0.043715957 container create 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:23:40 compute-0 systemd[1]: Started libpod-conmon-0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4.scope.
Feb 02 15:23:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:40 compute-0 ceph-mon[75334]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.836679924 +0000 UTC m=+0.022027496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.935803852 +0000 UTC m=+0.121151464 container init 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.94769577 +0000 UTC m=+0.133043352 container start 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.951168637 +0000 UTC m=+0.136516219 container attach 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:23:40 compute-0 stupefied_swartz[237893]: 167 167
Feb 02 15:23:40 compute-0 systemd[1]: libpod-0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4.scope: Deactivated successfully.
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.955984382 +0000 UTC m=+0.141331974 container died 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:23:40 compute-0 sudo[237923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdswvmtqmmxhtnyovfashftwybvmfcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045820.738064-1221-167222141293145/AnsiballZ_stat.py'
Feb 02 15:23:40 compute-0 sudo[237923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4540b3bbebe0344bcfd19ced44403041dc1dec5a069e9d55a180285d764ba4f5-merged.mount: Deactivated successfully.
Feb 02 15:23:40 compute-0 podman[237846]: 2026-02-02 15:23:40.993761853 +0000 UTC m=+0.179109435 container remove 0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:23:41 compute-0 systemd[1]: libpod-conmon-0805e2dc191c90326b68df8adfb823101804d872e926a98b71b0ffea31f042f4.scope: Deactivated successfully.
Feb 02 15:23:41 compute-0 python3.9[237933]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.151084905 +0000 UTC m=+0.058427146 container create 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:23:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:41 compute-0 sudo[237923]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:41 compute-0 systemd[1]: Started libpod-conmon-9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b.scope.
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.128481966 +0000 UTC m=+0.035824227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:23:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366cd493d6b9683b2afb19eca9bb5d37961dfe5e3af87a77a080570b76fdd411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366cd493d6b9683b2afb19eca9bb5d37961dfe5e3af87a77a080570b76fdd411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366cd493d6b9683b2afb19eca9bb5d37961dfe5e3af87a77a080570b76fdd411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/366cd493d6b9683b2afb19eca9bb5d37961dfe5e3af87a77a080570b76fdd411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.242664787 +0000 UTC m=+0.150007058 container init 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.249054164 +0000 UTC m=+0.156396405 container start 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.25241521 +0000 UTC m=+0.159757481 container attach 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:23:41 compute-0 sudo[238136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxqzbcjlgpxowrdcvmmhdsruuhxshcmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045821.381328-1230-121502397324339/AnsiballZ_file.py'
Feb 02 15:23:41 compute-0 sudo[238136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:41 compute-0 python3.9[238145]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:23:41 compute-0 sudo[238136]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:41 compute-0 lvm[238218]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:23:41 compute-0 lvm[238217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:23:41 compute-0 lvm[238217]: VG ceph_vg0 finished
Feb 02 15:23:41 compute-0 lvm[238218]: VG ceph_vg1 finished
Feb 02 15:23:41 compute-0 lvm[238239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:23:41 compute-0 lvm[238239]: VG ceph_vg2 finished
Feb 02 15:23:41 compute-0 wizardly_rhodes[237964]: {}
Feb 02 15:23:41 compute-0 systemd[1]: libpod-9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b.scope: Deactivated successfully.
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.944830616 +0000 UTC m=+0.852172867 container died 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-366cd493d6b9683b2afb19eca9bb5d37961dfe5e3af87a77a080570b76fdd411-merged.mount: Deactivated successfully.
Feb 02 15:23:41 compute-0 podman[237946]: 2026-02-02 15:23:41.99634628 +0000 UTC m=+0.903688561 container remove 9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_rhodes, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:23:42 compute-0 systemd[1]: libpod-conmon-9552dbda4bb5bb0d9abb851255b936f3f1a66cfc0eb8e36da8fe39790f160a0b.scope: Deactivated successfully.
Feb 02 15:23:42 compute-0 sudo[237703]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:23:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:23:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:42 compute-0 sudo[238334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:23:42 compute-0 sudo[238334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:23:42 compute-0 sudo[238334]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:42 compute-0 sudo[238384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokglqcvqzvrkzpnduazcrkabufmmbjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045821.7903068-1230-17100709140561/AnsiballZ_copy.py'
Feb 02 15:23:42 compute-0 sudo[238384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:42 compute-0 python3.9[238387]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770045821.7903068-1230-17100709140561/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 02 15:23:42 compute-0 sudo[238384]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:42 compute-0 sudo[238461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qggskylechchcxmpoqrgsmpxmyimrkzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045821.7903068-1230-17100709140561/AnsiballZ_systemd.py'
Feb 02 15:23:42 compute-0 sudo[238461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:23:42
Feb 02 15:23:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:23:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:23:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'vms', 'default.rgw.meta']
Feb 02 15:23:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:23:42 compute-0 python3.9[238463]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 02 15:23:42 compute-0 systemd[1]: Reloading.
Feb 02 15:23:42 compute-0 systemd-rc-local-generator[238490]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:23:42 compute-0 systemd-sysv-generator[238494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:23:43 compute-0 ceph-mon[75334]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:43 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:43 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:23:43 compute-0 sudo[238461]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:43 compute-0 sudo[238572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hahdqtfqfvrmmlbqmbhrquwehwevbrdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045821.7903068-1230-17100709140561/AnsiballZ_systemd.py'
Feb 02 15:23:43 compute-0 sudo[238572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:43 compute-0 python3.9[238574]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 02 15:23:43 compute-0 systemd[1]: Reloading.
Feb 02 15:23:43 compute-0 systemd-rc-local-generator[238600]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 02 15:23:43 compute-0 systemd-sysv-generator[238607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 02 15:23:44 compute-0 systemd[1]: Starting nova_compute container...
Feb 02 15:23:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:44 compute-0 podman[238614]: 2026-02-02 15:23:44.14480413 +0000 UTC m=+0.121584307 container init 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:23:44 compute-0 podman[238614]: 2026-02-02 15:23:44.159302387 +0000 UTC m=+0.136082494 container start 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:23:44 compute-0 podman[238614]: nova_compute
Feb 02 15:23:44 compute-0 systemd[1]: Started nova_compute container.
Feb 02 15:23:44 compute-0 nova_compute[238629]: + sudo -E kolla_set_configs
Feb 02 15:23:44 compute-0 sudo[238572]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Validating config file
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying service configuration files
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Deleting /etc/ceph
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Creating directory /etc/ceph
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/ceph
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Writing out command to execute
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:44 compute-0 nova_compute[238629]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 15:23:44 compute-0 nova_compute[238629]: ++ cat /run_command
Feb 02 15:23:44 compute-0 nova_compute[238629]: + CMD=nova-compute
Feb 02 15:23:44 compute-0 nova_compute[238629]: + ARGS=
Feb 02 15:23:44 compute-0 nova_compute[238629]: + sudo kolla_copy_cacerts
Feb 02 15:23:44 compute-0 nova_compute[238629]: + [[ ! -n '' ]]
Feb 02 15:23:44 compute-0 nova_compute[238629]: + . kolla_extend_start
Feb 02 15:23:44 compute-0 nova_compute[238629]: Running command: 'nova-compute'
Feb 02 15:23:44 compute-0 nova_compute[238629]: + echo 'Running command: '\''nova-compute'\'''
Feb 02 15:23:44 compute-0 nova_compute[238629]: + umask 0022
Feb 02 15:23:44 compute-0 nova_compute[238629]: + exec nova-compute
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:23:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:23:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:45 compute-0 ceph-mon[75334]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:45 compute-0 python3.9[238790]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:46 compute-0 ceph-mon[75334]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:46 compute-0 python3.9[238941]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:47 compute-0 python3.9[239091]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.074 238633 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.074 238633 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.074 238633 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.074 238633 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 02 15:23:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.215 238633 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.242 238633 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.242 238633 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 02 15:23:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:47 compute-0 sudo[239245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzdgkngmzfylmnhzvqwfufswvygaivdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045827.2884896-1290-158350423278469/AnsiballZ_podman_container.py'
Feb 02 15:23:47 compute-0 sudo[239245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:47 compute-0 nova_compute[238629]: 2026-02-02 15:23:47.876 238633 INFO nova.virt.driver [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 02 15:23:48 compute-0 python3.9[239247]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.010 238633 INFO nova.compute.provider_config [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 02 15:23:48 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.028 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.029 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.029 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.029 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.029 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.030 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.031 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.032 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.033 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.034 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.035 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.036 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.037 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.038 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.039 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.040 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.041 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.042 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.042 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.042 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.042 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.042 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.043 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.044 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.045 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.046 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.047 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.048 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.049 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.050 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.051 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.052 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.053 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.054 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.055 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.056 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.057 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.058 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.059 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.060 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.061 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.062 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.062 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.062 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.062 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.062 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.063 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.064 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.065 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.066 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.067 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.068 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.069 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.070 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.071 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.072 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.072 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.072 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.072 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.072 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.073 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.074 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.075 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.076 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.077 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.078 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.079 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.079 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.079 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.079 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.080 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.081 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.082 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.083 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.083 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.083 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.083 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.083 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.084 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.085 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.086 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.087 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.088 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.089 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.090 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.091 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.091 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.091 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.091 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.091 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.092 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.092 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.092 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.092 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.093 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.093 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.093 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.093 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.093 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.094 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.095 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.096 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.097 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.098 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.099 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.100 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.101 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.102 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.103 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 sudo[239245]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.104 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.105 238633 WARNING oslo_config.cfg [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 02 15:23:48 compute-0 nova_compute[238629]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 02 15:23:48 compute-0 nova_compute[238629]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 02 15:23:48 compute-0 nova_compute[238629]: and ``live_migration_inbound_addr`` respectively.
Feb 02 15:23:48 compute-0 nova_compute[238629]: ).  Its value may be silently ignored in the future.
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.105 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.105 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.105 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.105 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.106 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.107 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rbd_secret_uuid        = e43470b2-6632-573a-87d3-0f5428ec59e9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.108 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.109 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.109 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.109 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.109 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.111 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.111 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.112 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.112 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.113 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.113 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.114 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.114 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.115 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.115 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.115 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.116 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.116 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.116 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.117 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.117 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.117 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.118 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.118 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.118 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.118 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.119 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.119 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.120 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.120 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.120 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.121 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.121 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.121 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.121 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.122 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.122 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.122 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.123 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.123 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.123 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.124 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.124 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.124 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.125 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.125 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.126 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.126 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.126 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.127 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.127 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.128 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.128 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.128 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.129 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.129 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.130 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.130 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.131 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.131 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.132 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.132 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.133 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.133 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.133 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.134 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.134 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.134 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.135 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.135 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.135 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.136 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.136 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.136 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.137 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.137 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.137 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.138 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.138 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.138 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.139 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.139 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.140 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.140 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.140 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.141 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.141 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.141 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.142 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.142 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.142 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.143 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.143 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.143 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.144 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.144 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.145 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.145 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.145 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.146 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.146 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.147 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.147 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.147 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.148 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.148 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.149 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.149 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.150 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.150 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.151 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.151 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.151 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.152 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.152 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.152 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.153 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.153 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.153 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.154 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.154 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.154 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.154 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.155 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.155 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.155 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.155 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.155 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.156 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.156 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.156 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.156 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.156 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.157 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.157 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.157 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.158 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.158 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.158 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.158 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.159 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.159 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.159 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.159 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.159 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.160 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.160 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.160 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.160 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.160 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.161 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.161 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.161 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.161 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.162 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.162 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.162 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.163 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.163 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.163 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.164 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.164 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.164 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.164 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.165 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.165 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.165 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.165 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.166 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.166 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.166 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.167 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.167 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.167 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.167 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.168 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.168 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.168 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.168 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.169 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.169 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.169 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.169 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.170 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.170 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.170 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.170 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.170 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.171 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.171 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.171 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.171 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.171 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.172 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.173 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.173 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.173 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.173 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.173 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.174 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.174 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.174 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.174 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.174 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.175 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.175 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.175 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.175 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.175 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.176 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.176 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.176 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.176 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.176 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.177 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.177 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.177 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.177 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.178 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.178 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.178 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.178 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.179 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.179 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.179 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.179 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.180 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.180 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.180 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.180 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.180 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.181 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.181 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.181 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.181 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.181 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.182 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.182 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.182 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.182 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.183 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.183 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.183 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.183 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.183 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.184 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.184 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.184 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.184 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.184 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.185 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.185 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.185 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.185 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.185 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.186 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.186 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.186 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.186 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.186 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.187 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.187 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.187 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.187 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.187 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.188 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.188 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.188 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.188 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.189 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.189 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.189 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.189 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.189 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.190 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.190 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.190 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.190 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.190 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.191 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.191 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.191 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.191 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.191 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.192 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.193 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.194 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.195 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.196 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.197 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.198 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.199 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.200 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.201 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.202 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.203 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.204 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.204 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.205 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.206 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.207 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.208 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.209 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.209 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.209 238633 DEBUG oslo_service.service [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.211 238633 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.223 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.223 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.224 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.224 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 02 15:23:48 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 02 15:23:48 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 02 15:23:48 compute-0 ceph-mon[75334]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.315 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f932028bfd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.318 238633 DEBUG nova.virt.libvirt.host [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f932028bfd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.320 238633 INFO nova.virt.libvirt.driver [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Connection event '1' reason 'None'
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.346 238633 WARNING nova.virt.libvirt.driver [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.347 238633 DEBUG nova.virt.libvirt.volume.mount [None req-2d10fe1b-cd90-4e8f-9001-9307823ef38f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 02 15:23:48 compute-0 sudo[239472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjmfgrewbcdwonrtdxplrwuisobdkfpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045828.2835844-1298-12949288590980/AnsiballZ_systemd.py'
Feb 02 15:23:48 compute-0 sudo[239472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:48 compute-0 python3.9[239474]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 02 15:23:48 compute-0 systemd[1]: Stopping nova_compute container...
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.919 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.920 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:23:48 compute-0 nova_compute[238629]: 2026-02-02 15:23:48.920 238633 DEBUG oslo_concurrency.lockutils [None req-53589254-c7a1-416d-8859-facbbf7c25b8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:23:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:49 compute-0 virtqemud[239316]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb 02 15:23:49 compute-0 virtqemud[239316]: hostname: compute-0
Feb 02 15:23:49 compute-0 virtqemud[239316]: End of file while reading data: Input/output error
Feb 02 15:23:49 compute-0 systemd[1]: libpod-1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51.scope: Deactivated successfully.
Feb 02 15:23:49 compute-0 systemd[1]: libpod-1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51.scope: Consumed 2.882s CPU time.
Feb 02 15:23:49 compute-0 podman[239486]: 2026-02-02 15:23:49.667403834 +0000 UTC m=+0.805065966 container died 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51-userdata-shm.mount: Deactivated successfully.
Feb 02 15:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b-merged.mount: Deactivated successfully.
Feb 02 15:23:50 compute-0 ceph-mon[75334]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:50 compute-0 podman[239486]: 2026-02-02 15:23:50.386009432 +0000 UTC m=+1.523671524 container cleanup 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 15:23:50 compute-0 podman[239486]: nova_compute
Feb 02 15:23:50 compute-0 podman[239517]: nova_compute
Feb 02 15:23:50 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 02 15:23:50 compute-0 systemd[1]: Stopped nova_compute container.
Feb 02 15:23:50 compute-0 systemd[1]: Starting nova_compute container...
Feb 02 15:23:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d85473afb120e4c9e5a29c0ca409202bd6544d96295b17037d79a12e7214f6b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:50 compute-0 podman[239530]: 2026-02-02 15:23:50.584928034 +0000 UTC m=+0.109306228 container init 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Feb 02 15:23:50 compute-0 podman[239530]: 2026-02-02 15:23:50.599337504 +0000 UTC m=+0.123715688 container start 1f74e3c4dd3ebd63aeebfd0693003c59982d6cc4c2b7a8783087a24fd5c03f51 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2)
Feb 02 15:23:50 compute-0 podman[239530]: nova_compute
Feb 02 15:23:50 compute-0 systemd[1]: Started nova_compute container.
Feb 02 15:23:50 compute-0 nova_compute[239545]: + sudo -E kolla_set_configs
Feb 02 15:23:50 compute-0 sudo[239472]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Validating config file
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying service configuration files
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /etc/ceph
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Creating directory /etc/ceph
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/ceph
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Writing out command to execute
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:50 compute-0 nova_compute[239545]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 02 15:23:50 compute-0 nova_compute[239545]: ++ cat /run_command
Feb 02 15:23:50 compute-0 nova_compute[239545]: + CMD=nova-compute
Feb 02 15:23:50 compute-0 nova_compute[239545]: + ARGS=
Feb 02 15:23:50 compute-0 nova_compute[239545]: + sudo kolla_copy_cacerts
Feb 02 15:23:50 compute-0 nova_compute[239545]: + [[ ! -n '' ]]
Feb 02 15:23:50 compute-0 nova_compute[239545]: + . kolla_extend_start
Feb 02 15:23:50 compute-0 nova_compute[239545]: Running command: 'nova-compute'
Feb 02 15:23:50 compute-0 nova_compute[239545]: + echo 'Running command: '\''nova-compute'\'''
Feb 02 15:23:50 compute-0 nova_compute[239545]: + umask 0022
Feb 02 15:23:50 compute-0 nova_compute[239545]: + exec nova-compute
Feb 02 15:23:51 compute-0 sudo[239706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgsphavyecqbydmytypivjffskeryca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1770045830.847615-1307-72528074788933/AnsiballZ_podman_container.py'
Feb 02 15:23:51 compute-0 sudo[239706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 15:23:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:51 compute-0 python3.9[239708]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 02 15:23:51 compute-0 systemd[1]: Started libpod-conmon-011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf.scope.
Feb 02 15:23:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3586da03fb6e5758800ff4a83f6cb63af6f5010218e50d25fc20bd8181f4391c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3586da03fb6e5758800ff4a83f6cb63af6f5010218e50d25fc20bd8181f4391c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3586da03fb6e5758800ff4a83f6cb63af6f5010218e50d25fc20bd8181f4391c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 02 15:23:51 compute-0 podman[239734]: 2026-02-02 15:23:51.616279795 +0000 UTC m=+0.124202720 container init 011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Feb 02 15:23:51 compute-0 podman[239734]: 2026-02-02 15:23:51.622840208 +0000 UTC m=+0.130763143 container start 011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:23:51 compute-0 python3.9[239708]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Applying nova statedir ownership
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 02 15:23:51 compute-0 nova_compute_init[239755]: INFO:nova_statedir:Nova statedir ownership complete
Feb 02 15:23:51 compute-0 systemd[1]: libpod-011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf.scope: Deactivated successfully.
Feb 02 15:23:51 compute-0 podman[239769]: 2026-02-02 15:23:51.736440783 +0000 UTC m=+0.038607235 container died 011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:23:51 compute-0 sudo[239706]: pam_unix(sudo:session): session closed for user root
Feb 02 15:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf-userdata-shm.mount: Deactivated successfully.
Feb 02 15:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3586da03fb6e5758800ff4a83f6cb63af6f5010218e50d25fc20bd8181f4391c-merged.mount: Deactivated successfully.
Feb 02 15:23:51 compute-0 podman[239769]: 2026-02-02 15:23:51.775872127 +0000 UTC m=+0.078038559 container cleanup 011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:23:51 compute-0 systemd[1]: libpod-conmon-011abca8e5d7aee8e870286abc4e6b02b96151539af595a2722c2bb588b90bdf.scope: Deactivated successfully.
Feb 02 15:23:52 compute-0 sshd-session[215175]: Connection closed by 192.168.122.30 port 47640
Feb 02 15:23:52 compute-0 sshd-session[215172]: pam_unix(sshd:session): session closed for user zuul
Feb 02 15:23:52 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Feb 02 15:23:52 compute-0 systemd[1]: session-50.scope: Consumed 1min 47.650s CPU time.
Feb 02 15:23:52 compute-0 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Feb 02 15:23:52 compute-0 systemd-logind[786]: Removed session 50.
Feb 02 15:23:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.385 239549 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.385 239549 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.385 239549 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.386 239549 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 02 15:23:52 compute-0 ceph-mon[75334]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.510 239549 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.535 239549 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:23:52 compute-0 nova_compute[239545]: 2026-02-02 15:23:52.536 239549 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.028 239549 INFO nova.virt.driver [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.115 239549 INFO nova.compute.provider_config [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.131 239549 DEBUG oslo_concurrency.lockutils [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.132 239549 DEBUG oslo_concurrency.lockutils [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.132 239549 DEBUG oslo_concurrency.lockutils [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.132 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.133 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.134 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.134 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.134 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.134 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.134 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.135 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.135 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.135 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.135 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.135 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.136 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.137 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.138 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.139 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.139 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.139 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.139 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.140 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.140 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.140 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.140 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.140 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.141 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.141 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.141 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.141 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.141 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.142 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.143 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.144 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.145 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.146 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.147 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.148 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.149 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.150 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.151 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.152 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.152 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.152 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.152 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.152 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.153 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.154 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.155 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.155 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.155 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.155 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.156 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.157 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.158 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.159 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.160 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.161 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.162 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.163 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.164 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.165 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.166 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.167 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.168 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.169 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.170 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.171 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.172 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.173 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.174 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.175 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.176 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.177 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.178 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.179 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.180 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.181 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.182 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.183 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.184 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.185 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.186 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.187 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.187 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.187 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.187 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.187 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.188 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.189 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.190 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.191 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.192 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.193 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.194 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.195 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.196 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.197 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.198 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.199 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.200 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.201 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.202 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.203 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.204 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.205 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 WARNING oslo_config.cfg [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 02 15:23:53 compute-0 nova_compute[239545]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 02 15:23:53 compute-0 nova_compute[239545]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 02 15:23:53 compute-0 nova_compute[239545]: and ``live_migration_inbound_addr`` respectively.
Feb 02 15:23:53 compute-0 nova_compute[239545]: ).  Its value may be silently ignored in the future.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.206 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.207 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.208 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rbd_secret_uuid        = e43470b2-6632-573a-87d3-0f5428ec59e9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.209 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.210 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.211 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.211 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.211 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.211 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.211 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.212 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.213 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.214 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.215 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.216 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.217 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.217 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.217 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.217 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.218 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.218 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.218 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.218 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.218 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.219 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.219 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.219 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.219 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.219 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.220 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.220 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.220 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.220 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.220 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.221 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.222 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.223 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.224 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.225 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.225 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.225 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.225 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.225 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.226 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.227 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.228 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.229 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.229 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.229 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.229 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.229 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.230 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.231 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.232 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.233 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.234 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.235 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.235 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.235 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.235 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.235 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.236 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.237 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.238 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.239 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.239 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.239 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.239 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.240 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.240 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.240 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.240 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.240 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.241 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.242 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.243 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.243 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.243 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.243 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.243 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.244 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.244 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.244 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.244 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.244 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.245 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.245 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.245 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.245 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.246 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.247 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.247 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.247 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.247 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.247 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.248 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.248 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.248 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.248 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.248 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.249 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.249 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.249 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.249 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.249 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.250 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.250 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.250 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.250 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.251 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.252 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.252 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.252 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.252 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.252 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.253 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.254 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.255 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.256 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.257 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.258 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.259 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.260 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.261 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.262 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.263 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.263 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.263 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.263 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.263 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.264 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.264 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.264 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.264 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.264 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.265 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.265 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.265 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.265 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.265 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.266 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.266 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.266 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.266 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.266 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.267 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.267 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.267 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.267 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.267 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.268 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.268 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.268 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.268 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.268 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.269 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.269 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.269 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.269 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.269 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.270 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.271 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.271 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.271 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.271 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.271 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.272 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.273 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.273 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.273 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.273 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.273 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.274 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.275 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.275 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.275 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.275 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.275 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.276 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.276 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.276 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.276 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.276 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.277 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.278 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.279 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.280 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.280 239549 DEBUG oslo_service.service [None req-8fe7b66a-57e3-4503-ad61-941dad444e4d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.281 239549 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.299 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.299 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.300 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.300 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.311 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff169d74250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.314 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff169d74250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.315 239549 INFO nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Connection event '1' reason 'None'
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.319 239549 INFO nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Libvirt host capabilities <capabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]: 
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <host>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <uuid>91f81291-8830-4d3a-ad9a-f49b9247697f</uuid>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <arch>x86_64</arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model>EPYC-Rome-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <vendor>AMD</vendor>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <microcode version='16777317'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <signature family='23' model='49' stepping='0'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='x2apic'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='tsc-deadline'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='osxsave'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='hypervisor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='tsc_adjust'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='spec-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='stibp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='arch-capabilities'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='cmp_legacy'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='topoext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='virt-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='lbrv'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='tsc-scale'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='vmcb-clean'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='pause-filter'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='pfthreshold'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='svme-addr-chk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='rdctl-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='skip-l1dfl-vmentry'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='mds-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature name='pschange-mc-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <pages unit='KiB' size='4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <pages unit='KiB' size='2048'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <pages unit='KiB' size='1048576'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <power_management>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <suspend_mem/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </power_management>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <iommu support='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <migration_features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <live/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <uri_transports>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <uri_transport>tcp</uri_transport>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <uri_transport>rdma</uri_transport>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </uri_transports>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </migration_features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <topology>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <cells num='1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <cell id='0'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <memory unit='KiB'>7864300</memory>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <pages unit='KiB' size='4'>1966075</pages>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <pages unit='KiB' size='2048'>0</pages>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <distances>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <sibling id='0' value='10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           </distances>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           <cpus num='8'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:           </cpus>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         </cell>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </cells>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </topology>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <cache>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </cache>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <secmodel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model>selinux</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <doi>0</doi>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </secmodel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <secmodel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model>dac</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <doi>0</doi>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </secmodel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </host>
Feb 02 15:23:53 compute-0 nova_compute[239545]: 
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <guest>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <os_type>hvm</os_type>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <arch name='i686'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <wordsize>32</wordsize>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <domain type='qemu'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <domain type='kvm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <pae/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <nonpae/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <acpi default='on' toggle='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <apic default='on' toggle='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <cpuselection/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <deviceboot/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <disksnapshot default='on' toggle='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <externalSnapshot/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </guest>
Feb 02 15:23:53 compute-0 nova_compute[239545]: 
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <guest>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <os_type>hvm</os_type>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <arch name='x86_64'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <wordsize>64</wordsize>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <domain type='qemu'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <domain type='kvm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <acpi default='on' toggle='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <apic default='on' toggle='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <cpuselection/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <deviceboot/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <disksnapshot default='on' toggle='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <externalSnapshot/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </guest>
Feb 02 15:23:53 compute-0 nova_compute[239545]: 
Feb 02 15:23:53 compute-0 nova_compute[239545]: </capabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]: 
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.326 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.335 239549 WARNING nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.335 239549 DEBUG nova.virt.libvirt.volume.mount [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.349 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 02 15:23:53 compute-0 nova_compute[239545]: <domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <domain>kvm</domain>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <arch>i686</arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <vcpu max='4096'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <iothreads supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <os supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='firmware'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <loader supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>rom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pflash</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='readonly'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>yes</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='secure'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </loader>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </os>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-passthrough' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='hostPassthroughMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='maximum' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='maximumMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-model' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <vendor>AMD</vendor>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='x2apic'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='hypervisor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='stibp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='overflow-recov'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='succor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lbrv'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-scale'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='flushbyasid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pause-filter'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pfthreshold'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='disable' name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='custom' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Dhyana-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v6'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v7'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <memoryBacking supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='sourceType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>anonymous</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>memfd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </memoryBacking>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <disk supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='diskDevice'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>disk</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cdrom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>floppy</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>lun</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>fdc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>sata</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <graphics supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vnc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egl-headless</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </graphics>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <video supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='modelType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vga</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cirrus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>none</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>bochs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ramfb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </video>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hostdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='mode'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>subsystem</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='startupPolicy'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>mandatory</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>requisite</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>optional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='subsysType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pci</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='capsType'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='pciBackend'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hostdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <rng supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>random</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <filesystem supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='driverType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>path</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>handle</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtiofs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </filesystem>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tpm supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-tis</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-crb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emulator</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>external</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendVersion'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>2.0</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </tpm>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <redirdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </redirdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <channel supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </channel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <crypto supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </crypto>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <interface supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>passt</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <panic supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>isa</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>hyperv</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </panic>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <console supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>null</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dev</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pipe</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stdio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>udp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tcp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu-vdagent</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </console>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <gic supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <vmcoreinfo supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <genid supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backingStoreInput supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backup supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <async-teardown supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <s390-pv supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <ps2 supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tdx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sev supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sgx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hyperv supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='features'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>relaxed</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vapic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>spinlocks</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vpindex</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>runtime</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>synic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stimer</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reset</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vendor_id</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>frequencies</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reenlightenment</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tlbflush</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ipi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>avic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emsr_bitmap</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>xmm_input</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <spinlocks>4095</spinlocks>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <stimer_direct>on</stimer_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hyperv>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <launchSecurity supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]: </domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.356 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 02 15:23:53 compute-0 nova_compute[239545]: <domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <domain>kvm</domain>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <arch>i686</arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <vcpu max='240'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <iothreads supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <os supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='firmware'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <loader supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>rom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pflash</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='readonly'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>yes</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='secure'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </loader>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </os>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-passthrough' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='hostPassthroughMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='maximum' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='maximumMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-model' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <vendor>AMD</vendor>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='x2apic'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='hypervisor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='stibp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='overflow-recov'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='succor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lbrv'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-scale'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='flushbyasid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pause-filter'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pfthreshold'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='disable' name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='custom' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Dhyana-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v6'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v7'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <memoryBacking supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='sourceType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>anonymous</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>memfd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </memoryBacking>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <disk supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='diskDevice'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>disk</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cdrom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>floppy</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>lun</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ide</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>fdc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>sata</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <graphics supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vnc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egl-headless</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </graphics>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <video supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='modelType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vga</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cirrus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>none</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>bochs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ramfb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </video>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hostdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='mode'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>subsystem</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='startupPolicy'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>mandatory</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>requisite</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>optional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='subsysType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pci</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='capsType'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='pciBackend'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hostdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <rng supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>random</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <filesystem supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='driverType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>path</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>handle</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtiofs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </filesystem>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tpm supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-tis</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-crb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emulator</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>external</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendVersion'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>2.0</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </tpm>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <redirdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </redirdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <channel supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </channel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <crypto supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </crypto>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <interface supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>passt</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <panic supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>isa</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>hyperv</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </panic>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <console supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>null</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dev</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pipe</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stdio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>udp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tcp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu-vdagent</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </console>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <gic supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <vmcoreinfo supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <genid supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backingStoreInput supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backup supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <async-teardown supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <s390-pv supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <ps2 supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tdx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sev supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sgx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hyperv supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='features'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>relaxed</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vapic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>spinlocks</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vpindex</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>runtime</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>synic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stimer</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reset</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vendor_id</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>frequencies</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reenlightenment</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tlbflush</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ipi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>avic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emsr_bitmap</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>xmm_input</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <spinlocks>4095</spinlocks>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <stimer_direct>on</stimer_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hyperv>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <launchSecurity supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]: </domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.405 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.410 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 02 15:23:53 compute-0 nova_compute[239545]: <domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <domain>kvm</domain>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <arch>x86_64</arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <vcpu max='4096'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <iothreads supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <os supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='firmware'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>efi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <loader supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>rom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pflash</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='readonly'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>yes</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='secure'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>yes</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </loader>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </os>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-passthrough' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='hostPassthroughMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='maximum' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='maximumMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-model' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <vendor>AMD</vendor>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='x2apic'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='hypervisor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='stibp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='overflow-recov'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='succor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lbrv'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-scale'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='flushbyasid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pause-filter'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pfthreshold'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='disable' name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='custom' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Dhyana-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v6'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v7'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <memoryBacking supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='sourceType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>anonymous</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>memfd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </memoryBacking>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <disk supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='diskDevice'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>disk</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cdrom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>floppy</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>lun</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>fdc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>sata</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <graphics supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vnc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egl-headless</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </graphics>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <video supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='modelType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vga</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cirrus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>none</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>bochs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ramfb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </video>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hostdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='mode'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>subsystem</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='startupPolicy'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>mandatory</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>requisite</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>optional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='subsysType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pci</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='capsType'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='pciBackend'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hostdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <rng supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>random</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <filesystem supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='driverType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>path</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>handle</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtiofs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </filesystem>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tpm supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-tis</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-crb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emulator</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>external</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendVersion'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>2.0</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </tpm>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <redirdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </redirdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <channel supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </channel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <crypto supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </crypto>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <interface supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>passt</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <panic supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>isa</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>hyperv</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </panic>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <console supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>null</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dev</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pipe</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stdio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>udp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tcp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu-vdagent</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </console>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <gic supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <vmcoreinfo supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <genid supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backingStoreInput supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backup supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <async-teardown supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <s390-pv supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <ps2 supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tdx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sev supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sgx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hyperv supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='features'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>relaxed</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vapic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>spinlocks</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vpindex</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>runtime</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>synic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stimer</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reset</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vendor_id</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>frequencies</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reenlightenment</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tlbflush</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ipi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>avic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emsr_bitmap</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>xmm_input</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <spinlocks>4095</spinlocks>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <stimer_direct>on</stimer_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hyperv>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <launchSecurity supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]: </domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.485 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 02 15:23:53 compute-0 nova_compute[239545]: <domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <path>/usr/libexec/qemu-kvm</path>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <domain>kvm</domain>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <arch>x86_64</arch>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <vcpu max='240'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <iothreads supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <os supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='firmware'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <loader supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>rom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pflash</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='readonly'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>yes</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='secure'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>no</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </loader>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </os>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-passthrough' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='hostPassthroughMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='maximum' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='maximumMigratable'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>on</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>off</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='host-model' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <vendor>AMD</vendor>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='x2apic'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-deadline'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='hypervisor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc_adjust'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='spec-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='stibp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='cmp_legacy'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='overflow-recov'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='succor'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='amd-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='virt-ssbd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lbrv'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='tsc-scale'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='vmcb-clean'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='flushbyasid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pause-filter'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='pfthreshold'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='svme-addr-chk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <feature policy='disable' name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <mode name='custom' supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Broadwell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cascadelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='ClearwaterForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ddpd-u'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sha512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm3'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sm4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Cooperlake-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Denverton-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Dhyana-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Genoa-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Milan-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Rome-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-Turin-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amd-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='auto-ibrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vp2intersect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fs-gs-base-ns'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibpb-brtype'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='no-nested-data-bp'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='null-sel-clr-base'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='perfmon-v2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbpb'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='srso-user-kernel-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='stibp-always-on'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='EPYC-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='GraniteRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-128'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-256'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx10-512'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='prefetchiti'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Haswell-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-noTSX'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v6'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Icelake-Server-v7'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='IvyBridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='KnightsMill-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4fmaps'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-4vnniw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512er'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512pf'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G4-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Opteron_G5-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fma4'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tbm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xop'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SapphireRapids-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='amx-tile'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-bf16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-fp16'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512-vpopcntdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bitalg'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vbmi2'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrc'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fzrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='la57'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='taa-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='tsx-ldtrk'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='SierraForest-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ifma'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-ne-convert'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx-vnni-int8'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bhi-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='bus-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cmpccxadd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fbsdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='fsrs'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ibrs-all'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='intel-psfd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ipred-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='lam'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mcdt-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pbrsb-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='psdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rrsba-ctrl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='sbdr-ssdp-no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='serialize'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vaes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='vpclmulqdq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Client-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='hle'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='rtm'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Skylake-Server-v5'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512bw'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512cd'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512dq'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512f'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='avx512vl'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='invpcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pcid'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='pku'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='mpx'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v2'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v3'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='core-capability'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='split-lock-detect'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='Snowridge-v4'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='cldemote'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='erms'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='gfni'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdir64b'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='movdiri'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='xsaves'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='athlon-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='core2duo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='coreduo-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='n270-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='ss'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <blockers model='phenom-v1'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnow'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <feature name='3dnowext'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </blockers>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </mode>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <memoryBacking supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <enum name='sourceType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>anonymous</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <value>memfd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </memoryBacking>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <disk supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='diskDevice'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>disk</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cdrom</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>floppy</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>lun</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ide</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>fdc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>sata</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <graphics supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vnc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egl-headless</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </graphics>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <video supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='modelType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vga</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>cirrus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>none</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>bochs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ramfb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </video>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hostdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='mode'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>subsystem</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='startupPolicy'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>mandatory</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>requisite</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>optional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='subsysType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pci</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>scsi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='capsType'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='pciBackend'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hostdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <rng supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtio-non-transitional</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>random</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>egd</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <filesystem supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='driverType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>path</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>handle</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>virtiofs</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </filesystem>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tpm supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-tis</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tpm-crb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emulator</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>external</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendVersion'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>2.0</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </tpm>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <redirdev supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='bus'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>usb</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </redirdev>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <channel supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </channel>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <crypto supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendModel'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>builtin</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </crypto>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <interface supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='backendType'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>default</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>passt</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <panic supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='model'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>isa</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>hyperv</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </panic>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <console supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='type'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>null</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vc</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pty</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dev</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>file</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>pipe</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stdio</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>udp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tcp</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>unix</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>qemu-vdagent</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>dbus</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </console>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   <features>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <gic supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <vmcoreinfo supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <genid supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backingStoreInput supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <backup supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <async-teardown supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <s390-pv supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <ps2 supported='yes'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <tdx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sev supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <sgx supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <hyperv supported='yes'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <enum name='features'>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>relaxed</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vapic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>spinlocks</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vpindex</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>runtime</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>synic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>stimer</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reset</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>vendor_id</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>frequencies</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>reenlightenment</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>tlbflush</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>ipi</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>avic</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>emsr_bitmap</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <value>xmm_input</value>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </enum>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       <defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <spinlocks>4095</spinlocks>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <stimer_direct>on</stimer_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_direct>on</tlbflush_direct>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <tlbflush_extended>on</tlbflush_extended>
Feb 02 15:23:53 compute-0 nova_compute[239545]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 02 15:23:53 compute-0 nova_compute[239545]:       </defaults>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     </hyperv>
Feb 02 15:23:53 compute-0 nova_compute[239545]:     <launchSecurity supported='no'/>
Feb 02 15:23:53 compute-0 nova_compute[239545]:   </features>
Feb 02 15:23:53 compute-0 nova_compute[239545]: </domainCapabilities>
Feb 02 15:23:53 compute-0 nova_compute[239545]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.556 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.556 239549 INFO nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Secure Boot support detected
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.559 239549 INFO nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.560 239549 INFO nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.573 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.635 239549 INFO nova.virt.node [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Determined node identity b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from /var/lib/nova/compute_id
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.659 239549 WARNING nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Compute nodes ['b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.701 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.739 239549 WARNING nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.739 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.739 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.740 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.740 239549 DEBUG nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:23:53 compute-0 nova_compute[239545]: 2026-02-02 15:23:53.740 239549 DEBUG oslo_concurrency.processutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:23:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:23:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:23:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534945866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.318 239549 DEBUG oslo_concurrency.processutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:23:54 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 02 15:23:54 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 02 15:23:54 compute-0 ceph-mon[75334]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2534945866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.632 239549 WARNING nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.634 239549 DEBUG nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.635 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.635 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.652 239549 WARNING nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] No compute node record for compute-0.ctlplane.example.com:b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 could not be found.
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.675 239549 INFO nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.743 239549 DEBUG nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:23:54 compute-0 nova_compute[239545]: 2026-02-02 15:23:54.743 239549 DEBUG nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:23:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:55 compute-0 nova_compute[239545]: 2026-02-02 15:23:55.727 239549 INFO nova.scheduler.client.report [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [req-b9130b17-2b1d-4a9d-a18f-5c5a096ec1be] Created resource provider record via placement API for resource provider with UUID b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 and name compute-0.ctlplane.example.com.
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.075 239549 DEBUG oslo_concurrency.processutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:23:56 compute-0 ceph-mon[75334]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:23:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223470127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.635 239549 DEBUG oslo_concurrency.processutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.641 239549 DEBUG nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 02 15:23:56 compute-0 nova_compute[239545]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.642 239549 INFO nova.virt.libvirt.host [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] kernel doesn't support AMD SEV
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.643 239549 DEBUG nova.compute.provider_tree [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.644 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.704 239549 DEBUG nova.scheduler.client.report [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Updated inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.705 239549 DEBUG nova.compute.provider_tree [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Updating resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.705 239549 DEBUG nova.compute.provider_tree [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.804 239549 DEBUG nova.compute.provider_tree [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Updating resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.826 239549 DEBUG nova.compute.resource_tracker [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.826 239549 DEBUG oslo_concurrency.lockutils [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.827 239549 DEBUG nova.service [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.915 239549 DEBUG nova.service [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 02 15:23:56 compute-0 nova_compute[239545]: 2026-02-02 15:23:56.916 239549 DEBUG nova.servicegroup.drivers.db [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 02 15:23:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.310472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837310514, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1759, "num_deletes": 251, "total_data_size": 2971514, "memory_usage": 3006224, "flush_reason": "Manual Compaction"}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837322327, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1679890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11729, "largest_seqno": 13487, "table_properties": {"data_size": 1674122, "index_size": 2843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14483, "raw_average_key_size": 20, "raw_value_size": 1661412, "raw_average_value_size": 2307, "num_data_blocks": 132, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045638, "oldest_key_time": 1770045638, "file_creation_time": 1770045837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11939 microseconds, and 3661 cpu microseconds.
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.322407) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1679890 bytes OK
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.322434) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.324368) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.324389) EVENT_LOG_v1 {"time_micros": 1770045837324383, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.324414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2964038, prev total WAL file size 2964038, number of live WAL files 2.
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.325342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1640KB)], [29(7960KB)]
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837325454, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9831827, "oldest_snapshot_seqno": -1}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3995 keys, 7715219 bytes, temperature: kUnknown
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837371847, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7715219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7686637, "index_size": 17464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95039, "raw_average_key_size": 23, "raw_value_size": 7612766, "raw_average_value_size": 1905, "num_data_blocks": 762, "num_entries": 3995, "num_filter_entries": 3995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.372136) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7715219 bytes
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.373644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.5 rd, 166.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4416, records dropped: 421 output_compression: NoCompression
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.373668) EVENT_LOG_v1 {"time_micros": 1770045837373656, "job": 12, "event": "compaction_finished", "compaction_time_micros": 46477, "compaction_time_cpu_micros": 26838, "output_level": 6, "num_output_files": 1, "total_output_size": 7715219, "num_input_records": 4416, "num_output_records": 3995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837374127, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045837375174, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.325130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.375337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.375353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.375357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.375360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:23:57.375363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:23:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2223470127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:23:58 compute-0 ceph-mon[75334]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:23:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:23:59.235 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:23:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:23:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:23:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:23:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:24:00 compute-0 ceph-mon[75334]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:02 compute-0 ceph-mon[75334]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:04 compute-0 ceph-mon[75334]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:06 compute-0 ceph-mon[75334]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:07 compute-0 podman[239915]: 2026-02-02 15:24:07.330623607 +0000 UTC m=+0.058336745 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 15:24:07 compute-0 podman[239914]: 2026-02-02 15:24:07.39647657 +0000 UTC m=+0.122831135 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:24:08 compute-0 ceph-mon[75334]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:10 compute-0 ceph-mon[75334]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:12 compute-0 ceph-mon[75334]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:14 compute-0 ceph-mon[75334]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:24:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673826988' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:24:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673826988' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:16 compute-0 ceph-mon[75334]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/673826988' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/673826988' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:24:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1008467808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:24:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1008467808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:24:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/975139901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:24:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/975139901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1008467808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1008467808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/975139901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:24:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/975139901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:24:18 compute-0 ceph-mon[75334]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:20 compute-0 ceph-mon[75334]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:22 compute-0 ceph-mon[75334]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:24 compute-0 ceph-mon[75334]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:26 compute-0 ceph-mon[75334]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:28 compute-0 ceph-mon[75334]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:30 compute-0 ceph-mon[75334]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:32 compute-0 ceph-mon[75334]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:34 compute-0 ceph-mon[75334]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:36 compute-0 ceph-mon[75334]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:38 compute-0 podman[239957]: 2026-02-02 15:24:38.33040847 +0000 UTC m=+0.075792641 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Feb 02 15:24:38 compute-0 podman[239958]: 2026-02-02 15:24:38.343698821 +0000 UTC m=+0.080921502 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 15:24:38 compute-0 ceph-mon[75334]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:40 compute-0 ceph-mon[75334]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:42 compute-0 sudo[240002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:24:42 compute-0 sudo[240002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:42 compute-0 sudo[240002]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:42 compute-0 sudo[240027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:24:42 compute-0 sudo[240027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:42 compute-0 sudo[240027]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:24:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:24:42 compute-0 sudo[240084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:24:42 compute-0 sudo[240084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:42 compute-0 sudo[240084]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:42 compute-0 sudo[240109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:24:42 compute-0 sudo[240109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:24:42
Feb 02 15:24:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:24:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:24:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 02 15:24:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:24:42 compute-0 ceph-mon[75334]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:24:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.051015918 +0000 UTC m=+0.063000976 container create db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:24:43 compute-0 systemd[1]: Started libpod-conmon-db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af.scope.
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.009489278 +0000 UTC m=+0.021474366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.149732067 +0000 UTC m=+0.161717155 container init db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.156873362 +0000 UTC m=+0.168858450 container start db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:24:43 compute-0 great_moser[240160]: 167 167
Feb 02 15:24:43 compute-0 systemd[1]: libpod-db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af.scope: Deactivated successfully.
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.164933477 +0000 UTC m=+0.176918555 container attach db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.16602265 +0000 UTC m=+0.178007698 container died db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0669535516ba684f9ec1414e9ec20d3bfca6956e6a4f0ef11bc037a49240151-merged.mount: Deactivated successfully.
Feb 02 15:24:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:43 compute-0 podman[240144]: 2026-02-02 15:24:43.207802486 +0000 UTC m=+0.219787544 container remove db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_moser, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:24:43 compute-0 systemd[1]: libpod-conmon-db8fff4e1e0cfd68965e1525f65439c01f791573990b2256c7ba48b64acb23af.scope: Deactivated successfully.
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.338951307 +0000 UTC m=+0.040999739 container create b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:24:43 compute-0 systemd[1]: Started libpod-conmon-b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc.scope.
Feb 02 15:24:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.403839923 +0000 UTC m=+0.105888375 container init b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.412431139 +0000 UTC m=+0.114479571 container start b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.322102742 +0000 UTC m=+0.024151194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.423490489 +0000 UTC m=+0.125538921 container attach b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:24:43 compute-0 gallant_williams[240199]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:24:43 compute-0 gallant_williams[240199]: --> All data devices are unavailable
Feb 02 15:24:43 compute-0 systemd[1]: libpod-b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc.scope: Deactivated successfully.
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.858865633 +0000 UTC m=+0.560914065 container died b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ac65cb1409e4746a7a03d3628a9c5255c11701bc30204337e20adab9b0525c5-merged.mount: Deactivated successfully.
Feb 02 15:24:43 compute-0 podman[240183]: 2026-02-02 15:24:43.909095002 +0000 UTC m=+0.611143434 container remove b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:24:43 compute-0 systemd[1]: libpod-conmon-b49a28a63164ff0d503cb7911213644184affb8273266182b71d100571c2dcdc.scope: Deactivated successfully.
Feb 02 15:24:43 compute-0 sudo[240109]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:44 compute-0 sudo[240231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:24:44 compute-0 sudo[240231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:44 compute-0 sudo[240231]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:44 compute-0 sudo[240256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:24:44 compute-0 sudo[240256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.372631365 +0000 UTC m=+0.048027322 container create 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:24:44 compute-0 systemd[1]: Started libpod-conmon-728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0.scope.
Feb 02 15:24:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.344138127 +0000 UTC m=+0.019534134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.47484195 +0000 UTC m=+0.150237917 container init 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.481602906 +0000 UTC m=+0.156998853 container start 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:24:44 compute-0 clever_antonelli[240310]: 167 167
Feb 02 15:24:44 compute-0 systemd[1]: libpod-728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0.scope: Deactivated successfully.
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.494251811 +0000 UTC m=+0.169647788 container attach 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.495443596 +0000 UTC m=+0.170839583 container died 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-54b44a75ea93a700665255509685ac71c8aef29fdf4ba7784e32f22f4e7c4711-merged.mount: Deactivated successfully.
Feb 02 15:24:44 compute-0 podman[240293]: 2026-02-02 15:24:44.59301356 +0000 UTC m=+0.268409507 container remove 728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:24:44 compute-0 systemd[1]: libpod-conmon-728dc36dce0ff4633d954cf84dd7967e1ac6745452ea2e9ba7ab523ac6e853f0.scope: Deactivated successfully.
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:24:44 compute-0 podman[240334]: 2026-02-02 15:24:44.703368352 +0000 UTC m=+0.033784353 container create e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:24:44 compute-0 systemd[1]: Started libpod-conmon-e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7.scope.
Feb 02 15:24:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6961d999e243f15549789300e3f5b61fee5ba180cfa3a8f2507d126c70cce5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6961d999e243f15549789300e3f5b61fee5ba180cfa3a8f2507d126c70cce5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6961d999e243f15549789300e3f5b61fee5ba180cfa3a8f2507d126c70cce5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6961d999e243f15549789300e3f5b61fee5ba180cfa3a8f2507d126c70cce5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:44 compute-0 podman[240334]: 2026-02-02 15:24:44.758419394 +0000 UTC m=+0.088835445 container init e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:24:44 compute-0 podman[240334]: 2026-02-02 15:24:44.766185533 +0000 UTC m=+0.096601544 container start e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:24:44 compute-0 podman[240334]: 2026-02-02 15:24:44.770332563 +0000 UTC m=+0.100748604 container attach e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:24:44 compute-0 podman[240334]: 2026-02-02 15:24:44.687444916 +0000 UTC m=+0.017860937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:24:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:24:44 compute-0 ceph-mon[75334]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:44 compute-0 nova_compute[239545]: 2026-02-02 15:24:44.918 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:45 compute-0 nova_compute[239545]: 2026-02-02 15:24:45.049 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:45 compute-0 tender_payne[240350]: {
Feb 02 15:24:45 compute-0 tender_payne[240350]:     "0": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:         {
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "devices": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "/dev/loop3"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             ],
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_name": "ceph_lv0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_size": "21470642176",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "name": "ceph_lv0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "tags": {
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.crush_device_class": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.encrypted": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_id": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.vdo": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.with_tpm": "0"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             },
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "vg_name": "ceph_vg0"
Feb 02 15:24:45 compute-0 tender_payne[240350]:         }
Feb 02 15:24:45 compute-0 tender_payne[240350]:     ],
Feb 02 15:24:45 compute-0 tender_payne[240350]:     "1": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:         {
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "devices": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "/dev/loop4"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             ],
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_name": "ceph_lv1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_size": "21470642176",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "name": "ceph_lv1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "tags": {
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.crush_device_class": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.encrypted": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_id": "1",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.vdo": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.with_tpm": "0"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             },
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "vg_name": "ceph_vg1"
Feb 02 15:24:45 compute-0 tender_payne[240350]:         }
Feb 02 15:24:45 compute-0 tender_payne[240350]:     ],
Feb 02 15:24:45 compute-0 tender_payne[240350]:     "2": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:         {
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "devices": [
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "/dev/loop5"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             ],
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_name": "ceph_lv2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_size": "21470642176",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "name": "ceph_lv2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "tags": {
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.cluster_name": "ceph",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.crush_device_class": "",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.encrypted": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.objectstore": "bluestore",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osd_id": "2",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.vdo": "0",
Feb 02 15:24:45 compute-0 tender_payne[240350]:                 "ceph.with_tpm": "0"
Feb 02 15:24:45 compute-0 tender_payne[240350]:             },
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "type": "block",
Feb 02 15:24:45 compute-0 tender_payne[240350]:             "vg_name": "ceph_vg2"
Feb 02 15:24:45 compute-0 tender_payne[240350]:         }
Feb 02 15:24:45 compute-0 tender_payne[240350]:     ]
Feb 02 15:24:45 compute-0 tender_payne[240350]: }
Feb 02 15:24:45 compute-0 systemd[1]: libpod-e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7.scope: Deactivated successfully.
Feb 02 15:24:45 compute-0 conmon[240350]: conmon e01225c989b7b4d0c382 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7.scope/container/memory.events
Feb 02 15:24:45 compute-0 podman[240334]: 2026-02-02 15:24:45.11268462 +0000 UTC m=+0.443100621 container died e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6961d999e243f15549789300e3f5b61fee5ba180cfa3a8f2507d126c70cce5-merged.mount: Deactivated successfully.
Feb 02 15:24:45 compute-0 podman[240334]: 2026-02-02 15:24:45.159215808 +0000 UTC m=+0.489631849 container remove e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:24:45 compute-0 systemd[1]: libpod-conmon-e01225c989b7b4d0c382970c0fa1878c08bfcf5f4d545e0fc465d17e8fc1d2e7.scope: Deactivated successfully.
Feb 02 15:24:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:45 compute-0 sudo[240256]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:45 compute-0 sudo[240371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:24:45 compute-0 sudo[240371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:45 compute-0 sudo[240371]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:45 compute-0 sudo[240396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:24:45 compute-0 sudo[240396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.550883635 +0000 UTC m=+0.051455256 container create d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:24:45 compute-0 systemd[1]: Started libpod-conmon-d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132.scope.
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.518398632 +0000 UTC m=+0.018970283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.631593485 +0000 UTC m=+0.132165096 container init d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.641402997 +0000 UTC m=+0.141974608 container start d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.645386814 +0000 UTC m=+0.145958515 container attach d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:24:45 compute-0 festive_tu[240448]: 167 167
Feb 02 15:24:45 compute-0 systemd[1]: libpod-d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132.scope: Deactivated successfully.
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.648020471 +0000 UTC m=+0.148592092 container died d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9202ad486205932c4706c261e9450809bd8241ea2f4be56f9565173debb5c082-merged.mount: Deactivated successfully.
Feb 02 15:24:45 compute-0 podman[240433]: 2026-02-02 15:24:45.682111549 +0000 UTC m=+0.182683170 container remove d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_tu, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:24:45 compute-0 systemd[1]: libpod-conmon-d676483b90af9bf9c54f105f0c4301ef1a65fb0dcd81e69e7e585f90803f5132.scope: Deactivated successfully.
Feb 02 15:24:45 compute-0 podman[240472]: 2026-02-02 15:24:45.848148316 +0000 UTC m=+0.044525405 container create 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:24:45 compute-0 systemd[1]: Started libpod-conmon-50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58.scope.
Feb 02 15:24:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937b9b5534d9ae6b2eb62405fc6f4e91c828628c22085a8f762e2f396b33aad9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937b9b5534d9ae6b2eb62405fc6f4e91c828628c22085a8f762e2f396b33aad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937b9b5534d9ae6b2eb62405fc6f4e91c828628c22085a8f762e2f396b33aad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937b9b5534d9ae6b2eb62405fc6f4e91c828628c22085a8f762e2f396b33aad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:24:45 compute-0 podman[240472]: 2026-02-02 15:24:45.831596418 +0000 UTC m=+0.027973537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:24:45 compute-0 podman[240472]: 2026-02-02 15:24:45.948263056 +0000 UTC m=+0.144640175 container init 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:24:45 compute-0 podman[240472]: 2026-02-02 15:24:45.953616112 +0000 UTC m=+0.149993221 container start 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:24:45 compute-0 podman[240472]: 2026-02-02 15:24:45.957446525 +0000 UTC m=+0.153823624 container attach 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:24:46 compute-0 lvm[240568]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:24:46 compute-0 lvm[240566]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:24:46 compute-0 lvm[240566]: VG ceph_vg0 finished
Feb 02 15:24:46 compute-0 lvm[240568]: VG ceph_vg1 finished
Feb 02 15:24:46 compute-0 lvm[240569]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:24:46 compute-0 lvm[240569]: VG ceph_vg2 finished
Feb 02 15:24:46 compute-0 heuristic_panini[240488]: {}
Feb 02 15:24:46 compute-0 systemd[1]: libpod-50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58.scope: Deactivated successfully.
Feb 02 15:24:46 compute-0 podman[240472]: 2026-02-02 15:24:46.585855501 +0000 UTC m=+0.782232630 container died 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-937b9b5534d9ae6b2eb62405fc6f4e91c828628c22085a8f762e2f396b33aad9-merged.mount: Deactivated successfully.
Feb 02 15:24:46 compute-0 podman[240472]: 2026-02-02 15:24:46.662036992 +0000 UTC m=+0.858414101 container remove 50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:24:46 compute-0 systemd[1]: libpod-conmon-50a2891966ec4ef331d336b684496066d90c699eb3d3a70c7a38b446975fad58.scope: Deactivated successfully.
Feb 02 15:24:46 compute-0 sudo[240396]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:24:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:24:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:46 compute-0 sudo[240584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:24:46 compute-0 sudo[240584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:24:46 compute-0 sudo[240584]: pam_unix(sudo:session): session closed for user root
Feb 02 15:24:46 compute-0 ceph-mon[75334]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:24:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:48 compute-0 ceph-mon[75334]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:50 compute-0 ceph-mon[75334]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.548 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.550 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.550 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.550 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.653 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.653 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.654 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.654 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.655 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.655 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.655 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.655 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.656 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.679 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.679 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.679 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.679 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:24:52 compute-0 nova_compute[239545]: 2026-02-02 15:24:52.680 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:24:53 compute-0 ceph-mon[75334]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:24:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551779976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.231 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.371 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.373 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5099MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.373 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.374 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.493 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.493 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:24:53 compute-0 nova_compute[239545]: 2026-02-02 15:24:53.509 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:24:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:24:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212997220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:24:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/551779976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:24:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2212997220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:24:54 compute-0 nova_compute[239545]: 2026-02-02 15:24:54.081 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:24:54 compute-0 nova_compute[239545]: 2026-02-02 15:24:54.087 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:24:54 compute-0 nova_compute[239545]: 2026-02-02 15:24:54.116 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:24:54 compute-0 nova_compute[239545]: 2026-02-02 15:24:54.117 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:24:54 compute-0 nova_compute[239545]: 2026-02-02 15:24:54.118 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:24:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:24:55 compute-0 ceph-mon[75334]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:57 compute-0 ceph-mon[75334]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:24:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 02 15:24:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1679604623' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 02 15:24:57 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 15:24:57 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 15:24:57 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 15:24:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1679604623' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 02 15:24:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:24:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:24:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:24:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:24:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:24:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:24:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:25:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:02 compute-0 ceph-mon[75334]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:02 compute-0 ceph-mon[75334]: from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 15:25:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:04 compute-0 ceph-mon[75334]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:04 compute-0 ceph-mon[75334]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:05 compute-0 ceph-mon[75334]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:07 compute-0 ceph-mon[75334]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:08 compute-0 ceph-mon[75334]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:09 compute-0 podman[240654]: 2026-02-02 15:25:09.320809505 +0000 UTC m=+0.052792805 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 15:25:09 compute-0 podman[240653]: 2026-02-02 15:25:09.352225285 +0000 UTC m=+0.087024736 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 02 15:25:10 compute-0 ceph-mon[75334]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:12 compute-0 ceph-mon[75334]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:14 compute-0 ceph-mon[75334]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:16 compute-0 ceph-mon[75334]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:18 compute-0 ceph-mon[75334]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:20 compute-0 ceph-mon[75334]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 02 15:25:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4125760856' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 02 15:25:20 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 15:25:20 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 15:25:20 compute-0 ceph-mgr[75628]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 02 15:25:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4125760856' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 02 15:25:21 compute-0 ceph-mon[75334]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 02 15:25:22 compute-0 ceph-mon[75334]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:24 compute-0 ceph-mon[75334]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:26 compute-0 ceph-mon[75334]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:25:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444673625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:25:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:25:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444673625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:25:28 compute-0 ceph-mon[75334]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1444673625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:25:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1444673625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:25:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:30 compute-0 ceph-mon[75334]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:38 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn missed beacon ack from the monitors
Feb 02 15:25:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:40 compute-0 podman[240696]: 2026-02-02 15:25:40.342500896 +0000 UTC m=+0.074581063 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:25:40 compute-0 podman[240695]: 2026-02-02 15:25:40.395513321 +0000 UTC m=+0.130557883 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, 
config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 02 15:25:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:42 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn missed beacon ack from the monitors
Feb 02 15:25:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:25:42
Feb 02 15:25:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:25:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:25:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta']
Feb 02 15:25:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:25:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:25:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:25:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:46 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn missed beacon ack from the monitors
Feb 02 15:25:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:47 compute-0 sudo[240740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:25:47 compute-0 sudo[240740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:25:47 compute-0 sudo[240740]: pam_unix(sudo:session): session closed for user root
Feb 02 15:25:47 compute-0 sudo[240765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:25:47 compute-0 sudo[240765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:25:47 compute-0 sudo[240765]: pam_unix(sudo:session): session closed for user root
Feb 02 15:25:48 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn MDS connection to Monitors appears to be laggy; 17.5513s since last acked beacon
Feb 02 15:25:48 compute-0 ceph-mds[95441]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Feb 02 15:25:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:50 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn missed beacon ack from the monitors
Feb 02 15:25:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:53 compute-0 ceph-mds[95441]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Feb 02 15:25:54 compute-0 nova_compute[239545]: 2026-02-02 15:25:54.108 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:54 compute-0 nova_compute[239545]: 2026-02-02 15:25:54.110 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:25:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:25:54 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn missed beacon ack from the monitors
Feb 02 15:25:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:56 compute-0 ceph-mds[95441]: mds.beacon.cephfs.compute-0.mcxxtn  MDS is no longer laggy
Feb 02 15:25:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 3e+01 seconds
Feb 02 15:25:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:25:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:25:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:25:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:25:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:25:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:25:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:25:57 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:25:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:25:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:25:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:25:57 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:25:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:25:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:25:57 compute-0 sudo[240821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:25:57 compute-0 sudo[240821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:25:57 compute-0 sudo[240821]: pam_unix(sudo:session): session closed for user root
Feb 02 15:25:57 compute-0 sudo[240846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:25:57 compute-0 sudo[240846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:25:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:57 compute-0 podman[240883]: 2026-02-02 15:25:57.478033628 +0000 UTC m=+0.025939936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:25:57 compute-0 podman[240883]: 2026-02-02 15:25:57.680184578 +0000 UTC m=+0.228090876 container create a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:25:57 compute-0 nova_compute[239545]: 2026-02-02 15:25:57.681 239549 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 15.77 sec
Feb 02 15:25:57 compute-0 systemd[1]: Started libpod-conmon-a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4.scope.
Feb 02 15:25:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:25:58 compute-0 podman[240883]: 2026-02-02 15:25:58.001993455 +0000 UTC m=+0.549899723 container init a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:25:58 compute-0 podman[240883]: 2026-02-02 15:25:58.011020073 +0000 UTC m=+0.558926331 container start a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:25:58 compute-0 suspicious_kapitsa[240900]: 167 167
Feb 02 15:25:58 compute-0 systemd[1]: libpod-a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4.scope: Deactivated successfully.
Feb 02 15:25:58 compute-0 conmon[240900]: conmon a40d25c6757961340513 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4.scope/container/memory.events
Feb 02 15:25:58 compute-0 podman[240883]: 2026-02-02 15:25:58.16756766 +0000 UTC m=+0.715473938 container attach a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:25:58 compute-0 podman[240883]: 2026-02-02 15:25:58.169603341 +0000 UTC m=+0.717509619 container died a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.169 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.170 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.170 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:25:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:25:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:25:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:25:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:25:58 compute-0 ceph-mon[75334]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a24ebea1aff4c87d7f8f6673d8ed547fe2f9e442765e697cf50cfd24b25abb2b-merged.mount: Deactivated successfully.
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.625 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.626 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.627 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.628 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.628 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.629 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.630 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.630 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.631 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:25:58 compute-0 podman[240883]: 2026-02-02 15:25:58.972358473 +0000 UTC m=+1.520264751 container remove a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.978 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.979 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.979 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.979 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:25:58 compute-0 nova_compute[239545]: 2026-02-02 15:25:58.980 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:25:59 compute-0 systemd[1]: libpod-conmon-a40d25c67579613405130ed0a9013151de854fd647b4590acd37f2ef31aacde4.scope: Deactivated successfully.
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.114616194 +0000 UTC m=+0.024663376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:25:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.226390492 +0000 UTC m=+0.136437654 container create 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:25:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:25:59.236 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:25:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:25:59.237 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:25:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:25:59.237 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:25:59 compute-0 systemd[1]: Started libpod-conmon-2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263.scope.
Feb 02 15:25:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:25:59 compute-0 ceph-mon[75334]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.565319369 +0000 UTC m=+0.475366591 container init 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.572254614 +0000 UTC m=+0.482301776 container start 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:25:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:25:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663912676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.621 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.671717505 +0000 UTC m=+0.581764687 container attach 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.767 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.768 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.768 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.768 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:25:59 compute-0 wonderful_elbakyan[240962]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:25:59 compute-0 wonderful_elbakyan[240962]: --> All data devices are unavailable
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.973 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:25:59 compute-0 nova_compute[239545]: 2026-02-02 15:25:59.973 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:25:59 compute-0 systemd[1]: libpod-2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263.scope: Deactivated successfully.
Feb 02 15:25:59 compute-0 podman[240945]: 2026-02-02 15:25:59.978845619 +0000 UTC m=+0.888892871 container died 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.000 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a554604b165b4cf08dc892ad655253748d0332e644277cb29b41a4c4c6abf96a-merged.mount: Deactivated successfully.
Feb 02 15:26:00 compute-0 podman[240945]: 2026-02-02 15:26:00.492879006 +0000 UTC m=+1.402926168 container remove 2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_elbakyan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:26:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/663912676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:26:00 compute-0 sudo[240846]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:00 compute-0 systemd[1]: libpod-conmon-2cc9d226e566ea8f4c36790c321a79fac28aac5e81edf11d2f7d5e3addefe263.scope: Deactivated successfully.
Feb 02 15:26:00 compute-0 sudo[241016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:26:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:26:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090036664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:26:00 compute-0 sudo[241016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:26:00 compute-0 sudo[241016]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.608 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.614 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:26:00 compute-0 sudo[241042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:26:00 compute-0 sudo[241042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.691 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.693 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:26:00 compute-0 nova_compute[239545]: 2026-02-02 15:26:00.693 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:26:00 compute-0 podman[241079]: 2026-02-02 15:26:00.952804162 +0000 UTC m=+0.088043129 container create f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:26:00 compute-0 podman[241079]: 2026-02-02 15:26:00.882421362 +0000 UTC m=+0.017660349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:26:01 compute-0 systemd[1]: Started libpod-conmon-f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098.scope.
Feb 02 15:26:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:26:01 compute-0 podman[241079]: 2026-02-02 15:26:01.195327935 +0000 UTC m=+0.330566922 container init f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:26:01 compute-0 podman[241079]: 2026-02-02 15:26:01.200128037 +0000 UTC m=+0.335367004 container start f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:26:01 compute-0 elegant_goodall[241095]: 167 167
Feb 02 15:26:01 compute-0 systemd[1]: libpod-f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098.scope: Deactivated successfully.
Feb 02 15:26:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:01 compute-0 podman[241079]: 2026-02-02 15:26:01.27667761 +0000 UTC m=+0.411916637 container attach f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 02 15:26:01 compute-0 podman[241079]: 2026-02-02 15:26:01.277201658 +0000 UTC m=+0.412440645 container died f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2cc6be1e079979e025726c7f074c3a275d1619da1337bf198003cbb23207359-merged.mount: Deactivated successfully.
Feb 02 15:26:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3090036664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:26:01 compute-0 ceph-mon[75334]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:01 compute-0 podman[241079]: 2026-02-02 15:26:01.840676016 +0000 UTC m=+0.975914983 container remove f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:26:01 compute-0 systemd[1]: libpod-conmon-f6195f01e103eed037458751497ad7937e7ba7cc0b003a86c0511b665d170098.scope: Deactivated successfully.
Feb 02 15:26:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:01.936906858 +0000 UTC m=+0.018183547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:02.039868901 +0000 UTC m=+0.121145620 container create 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:26:02 compute-0 systemd[1]: Started libpod-conmon-0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10.scope.
Feb 02 15:26:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be91b7a08808a6972bd6798254c951f6cff9c96447e5087bd3c9218632841931/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be91b7a08808a6972bd6798254c951f6cff9c96447e5087bd3c9218632841931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be91b7a08808a6972bd6798254c951f6cff9c96447e5087bd3c9218632841931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be91b7a08808a6972bd6798254c951f6cff9c96447e5087bd3c9218632841931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:02.271873685 +0000 UTC m=+0.353150384 container init 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:02.278489875 +0000 UTC m=+0.359766554 container start 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:02.337979579 +0000 UTC m=+0.419256278 container attach 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:26:02 compute-0 gracious_tesla[241138]: {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     "0": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "devices": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "/dev/loop3"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             ],
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_name": "ceph_lv0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_size": "21470642176",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "name": "ceph_lv0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "tags": {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_name": "ceph",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.crush_device_class": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.encrypted": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.objectstore": "bluestore",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_id": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.vdo": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.with_tpm": "0"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             },
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "vg_name": "ceph_vg0"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         }
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     ],
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     "1": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "devices": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "/dev/loop4"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             ],
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_name": "ceph_lv1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_size": "21470642176",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "name": "ceph_lv1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "tags": {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_name": "ceph",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.crush_device_class": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.encrypted": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.objectstore": "bluestore",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_id": "1",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.vdo": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.with_tpm": "0"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             },
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "vg_name": "ceph_vg1"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         }
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     ],
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     "2": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "devices": [
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "/dev/loop5"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             ],
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_name": "ceph_lv2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_size": "21470642176",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "name": "ceph_lv2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "tags": {
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.cluster_name": "ceph",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.crush_device_class": "",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.encrypted": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.objectstore": "bluestore",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osd_id": "2",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.vdo": "0",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:                 "ceph.with_tpm": "0"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             },
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "type": "block",
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:             "vg_name": "ceph_vg2"
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:         }
Feb 02 15:26:02 compute-0 gracious_tesla[241138]:     ]
Feb 02 15:26:02 compute-0 gracious_tesla[241138]: }
Feb 02 15:26:02 compute-0 systemd[1]: libpod-0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10.scope: Deactivated successfully.
Feb 02 15:26:02 compute-0 podman[241121]: 2026-02-02 15:26:02.565090108 +0000 UTC m=+0.646366807 container died 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-be91b7a08808a6972bd6798254c951f6cff9c96447e5087bd3c9218632841931-merged.mount: Deactivated successfully.
Feb 02 15:26:03 compute-0 podman[241121]: 2026-02-02 15:26:03.061923543 +0000 UTC m=+1.143200232 container remove 0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:26:03 compute-0 sudo[241042]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:03 compute-0 systemd[1]: libpod-conmon-0d63a5046208d2eca0ef1bb9e3df4cee1a193a011a6c328964dce515f7ec9f10.scope: Deactivated successfully.
Feb 02 15:26:03 compute-0 sudo[241159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:26:03 compute-0 sudo[241159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:26:03 compute-0 sudo[241159]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:03 compute-0 sudo[241184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:26:03 compute-0 sudo[241184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:26:03 compute-0 ceph-mon[75334]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.493976816 +0000 UTC m=+0.020798038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.609046593 +0000 UTC m=+0.135867795 container create 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:26:03 compute-0 systemd[1]: Started libpod-conmon-0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba.scope.
Feb 02 15:26:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.851328673 +0000 UTC m=+0.378149895 container init 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.857988544 +0000 UTC m=+0.384809746 container start 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:26:03 compute-0 infallible_albattani[241238]: 167 167
Feb 02 15:26:03 compute-0 systemd[1]: libpod-0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba.scope: Deactivated successfully.
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.992229112 +0000 UTC m=+0.519050314 container attach 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:26:03 compute-0 podman[241221]: 2026-02-02 15:26:03.993310309 +0000 UTC m=+0.520131511 container died 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d9110b150627eb6a24ad1c7490109ed8e49494db20066b5e03a49b1e76d5581-merged.mount: Deactivated successfully.
Feb 02 15:26:04 compute-0 podman[241221]: 2026-02-02 15:26:04.680921673 +0000 UTC m=+1.207742875 container remove 0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:26:04 compute-0 systemd[1]: libpod-conmon-0ebd264cece506fb1dd6274d5b5314a263e99b1b42171148cf9144818a8465ba.scope: Deactivated successfully.
Feb 02 15:26:04 compute-0 podman[241263]: 2026-02-02 15:26:04.812235357 +0000 UTC m=+0.020985620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:26:04 compute-0 podman[241263]: 2026-02-02 15:26:04.976047555 +0000 UTC m=+0.184797778 container create c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:26:05 compute-0 systemd[1]: Started libpod-conmon-c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a.scope.
Feb 02 15:26:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf3c65b5820de847b545cae8dda279135bcc221f95320b000e54923c31b7de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf3c65b5820de847b545cae8dda279135bcc221f95320b000e54923c31b7de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf3c65b5820de847b545cae8dda279135bcc221f95320b000e54923c31b7de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf3c65b5820de847b545cae8dda279135bcc221f95320b000e54923c31b7de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:26:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:05 compute-0 podman[241263]: 2026-02-02 15:26:05.65851544 +0000 UTC m=+0.867265753 container init c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:26:05 compute-0 podman[241263]: 2026-02-02 15:26:05.696830452 +0000 UTC m=+0.905580705 container start c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:26:05 compute-0 podman[241263]: 2026-02-02 15:26:05.941500518 +0000 UTC m=+1.150250751 container attach c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:26:06 compute-0 ceph-mon[75334]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:06 compute-0 lvm[241359]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:26:06 compute-0 lvm[241358]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:26:06 compute-0 lvm[241358]: VG ceph_vg0 finished
Feb 02 15:26:06 compute-0 lvm[241359]: VG ceph_vg1 finished
Feb 02 15:26:06 compute-0 lvm[241361]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:26:06 compute-0 lvm[241361]: VG ceph_vg2 finished
Feb 02 15:26:06 compute-0 hungry_mccarthy[241280]: {}
Feb 02 15:26:06 compute-0 systemd[1]: libpod-c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a.scope: Deactivated successfully.
Feb 02 15:26:06 compute-0 podman[241263]: 2026-02-02 15:26:06.423422477 +0000 UTC m=+1.632172690 container died c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-abf3c65b5820de847b545cae8dda279135bcc221f95320b000e54923c31b7de0-merged.mount: Deactivated successfully.
Feb 02 15:26:06 compute-0 podman[241263]: 2026-02-02 15:26:06.777951061 +0000 UTC m=+1.986701284 container remove c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:26:06 compute-0 systemd[1]: libpod-conmon-c60e3378abf61bf4c21a55fd55d122a23ae54e7ef29d778884bd2bc7edefdc8a.scope: Deactivated successfully.
Feb 02 15:26:06 compute-0 sudo[241184]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:26:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:26:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:26:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:26:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:06 compute-0 sudo[241375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:26:06 compute-0 sudo[241375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:26:06 compute-0 sudo[241375]: pam_unix(sudo:session): session closed for user root
Feb 02 15:26:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:26:07 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:26:07 compute-0 ceph-mon[75334]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:09 compute-0 ceph-mon[75334]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:11 compute-0 podman[241401]: 2026-02-02 15:26:11.309601017 +0000 UTC m=+0.052907315 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 02 15:26:11 compute-0 ceph-mon[75334]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:11 compute-0 podman[241400]: 2026-02-02 15:26:11.360399328 +0000 UTC m=+0.103720676 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 02 15:26:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:11.855 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:26:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:11.856 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:26:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:11.858 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:26:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:13 compute-0 ceph-mon[75334]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:15 compute-0 ceph-mon[75334]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:17 compute-0 ceph-mon[75334]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:19 compute-0 ceph-mon[75334]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:21 compute-0 ceph-mon[75334]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.892937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045981892969, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1478, "num_deletes": 506, "total_data_size": 1850720, "memory_usage": 1889264, "flush_reason": "Manual Compaction"}
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045981929858, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1821891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13488, "largest_seqno": 14965, "table_properties": {"data_size": 1815432, "index_size": 3149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16385, "raw_average_key_size": 18, "raw_value_size": 1800367, "raw_average_value_size": 2020, "num_data_blocks": 144, "num_entries": 891, "num_filter_entries": 891, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045837, "oldest_key_time": 1770045837, "file_creation_time": 1770045981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 36996 microseconds, and 3946 cpu microseconds.
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.929930) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1821891 bytes OK
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.929949) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.936272) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.936300) EVENT_LOG_v1 {"time_micros": 1770045981936294, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.936318) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1843060, prev total WAL file size 1843060, number of live WAL files 2.
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.936934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1779KB)], [32(7534KB)]
Feb 02 15:26:21 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045981937015, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9537110, "oldest_snapshot_seqno": -1}
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3861 keys, 7532887 bytes, temperature: kUnknown
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045982059217, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7532887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7505180, "index_size": 16973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94479, "raw_average_key_size": 24, "raw_value_size": 7433355, "raw_average_value_size": 1925, "num_data_blocks": 721, "num_entries": 3861, "num_filter_entries": 3861, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770045981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.059505) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7532887 bytes
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.061240) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.0 rd, 61.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.4) write-amplify(4.1) OK, records in: 4886, records dropped: 1025 output_compression: NoCompression
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.061261) EVENT_LOG_v1 {"time_micros": 1770045982061249, "job": 14, "event": "compaction_finished", "compaction_time_micros": 122293, "compaction_time_cpu_micros": 25322, "output_level": 6, "num_output_files": 1, "total_output_size": 7532887, "num_input_records": 4886, "num_output_records": 3861, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045982061567, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770045982062290, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:21.936760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.062368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.062373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.062374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.062376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:22 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:26:22.062377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:26:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:23 compute-0 ceph-mon[75334]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:25 compute-0 ceph-mon[75334]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:26:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3311 writes, 14K keys, 3311 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3311 writes, 3311 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1236 writes, 5688 keys, 1236 commit groups, 1.0 writes per commit group, ingest: 8.30 MB, 0.01 MB/s
                                           Interval WAL: 1236 writes, 1236 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    135.0      0.12              0.03         7    0.017       0      0       0.0       0.0
                                             L6      1/0    7.18 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    152.0    124.8      0.34              0.15         6    0.056     24K   3204       0.0       0.0
                                            Sum      1/0    7.18 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    111.9    127.5      0.46              0.18        13    0.035     24K   3204       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    108.2    109.7      0.33              0.12         8    0.041     17K   2474       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    152.0    124.8      0.34              0.15         6    0.056     24K   3204       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    138.3      0.12              0.03         6    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 308.00 MB usage: 1.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(109,1.69 MB,0.549955%) FilterBlock(14,75.61 KB,0.0239731%) IndexBlock(14,149.55 KB,0.0474162%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:26:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:27 compute-0 ceph-mon[75334]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:26:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69096859' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:26:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:26:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69096859' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:26:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/69096859' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:26:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/69096859' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:26:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:29 compute-0 ceph-mon[75334]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:31 compute-0 ceph-mon[75334]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:33 compute-0 ceph-mon[75334]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:35 compute-0 ceph-mon[75334]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:37 compute-0 ceph-mon[75334]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:39 compute-0 ceph-mon[75334]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:41 compute-0 ceph-mon[75334]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:42 compute-0 podman[241446]: 2026-02-02 15:26:42.295796549 +0000 UTC m=+0.036851200 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 15:26:42 compute-0 podman[241445]: 2026-02-02 15:26:42.321331114 +0000 UTC m=+0.064945465 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:26:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:26:42
Feb 02 15:26:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:26:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:26:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes']
Feb 02 15:26:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:26:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:43 compute-0 ceph-mon[75334]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:26:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:26:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:45 compute-0 ceph-mon[75334]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:47 compute-0 ceph-mon[75334]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:49 compute-0 ceph-mon[75334]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:51 compute-0 ceph-mon[75334]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:53 compute-0 ceph-mon[75334]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:26:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:26:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:55 compute-0 ceph-mon[75334]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:26:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:57 compute-0 ceph-mon[75334]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:59.238 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:26:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:59.239 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:26:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:26:59.240 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:26:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:26:59 compute-0 ceph-mon[75334]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.696 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.697 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.697 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.697 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.732 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.732 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.733 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.733 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.734 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.734 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.734 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.734 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.735 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.767 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.767 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.768 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.768 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:27:00 compute-0 nova_compute[239545]: 2026-02-02 15:27:00.768 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:27:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:27:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3233137470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.286 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:27:01 compute-0 ceph-mon[75334]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3233137470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.414 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.415 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.415 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.415 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.461 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.461 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.475 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:27:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:27:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005250505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.991 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:27:01 compute-0 nova_compute[239545]: 2026-02-02 15:27:01.998 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:27:02 compute-0 nova_compute[239545]: 2026-02-02 15:27:02.014 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:27:02 compute-0 nova_compute[239545]: 2026-02-02 15:27:02.017 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:27:02 compute-0 nova_compute[239545]: 2026-02-02 15:27:02.017 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:27:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2005250505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:03 compute-0 ceph-mon[75334]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:05 compute-0 ceph-mon[75334]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:06 compute-0 sudo[241533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:27:06 compute-0 sudo[241533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:06 compute-0 sudo[241533]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:07 compute-0 sudo[241558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:27:07 compute-0 sudo[241558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:07 compute-0 sudo[241558]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:07 compute-0 sudo[241602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:27:07 compute-0 sudo[241602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:07 compute-0 sudo[241602]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:07 compute-0 sudo[241627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:27:07 compute-0 sudo[241627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:07 compute-0 sudo[241627]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:27:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:27:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:27:07 compute-0 sudo[241684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:27:07 compute-0 sudo[241684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:07 compute-0 sudo[241684]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:08 compute-0 sudo[241709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:27:08 compute-0 sudo[241709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.334290492 +0000 UTC m=+0.086171805 container create 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.273077902 +0000 UTC m=+0.024959235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:27:08 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:27:08 compute-0 systemd[1]: Started libpod-conmon-176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc.scope.
Feb 02 15:27:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.602812549 +0000 UTC m=+0.354693902 container init 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.608097823 +0000 UTC m=+0.359979166 container start 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:27:08 compute-0 eager_knuth[241763]: 167 167
Feb 02 15:27:08 compute-0 systemd[1]: libpod-176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc.scope: Deactivated successfully.
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.688865449 +0000 UTC m=+0.440746792 container attach 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:27:08 compute-0 podman[241746]: 2026-02-02 15:27:08.689372989 +0000 UTC m=+0.441254342 container died 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e031cb981e1bf160c19d0404c964fdf55a6c1368159d57301d3b828bd60fd0a4-merged.mount: Deactivated successfully.
Feb 02 15:27:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:09 compute-0 podman[241746]: 2026-02-02 15:27:09.484142315 +0000 UTC m=+1.236023618 container remove 176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:27:09 compute-0 ceph-mon[75334]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:09 compute-0 systemd[1]: libpod-conmon-176671b49498aa95cb3b3aab324bfbfccc0889e33e28fc814c7e193d33be0fbc.scope: Deactivated successfully.
Feb 02 15:27:09 compute-0 podman[241788]: 2026-02-02 15:27:09.699525863 +0000 UTC m=+0.082724956 container create e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:27:09 compute-0 podman[241788]: 2026-02-02 15:27:09.643923653 +0000 UTC m=+0.027122766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:09 compute-0 systemd[1]: Started libpod-conmon-e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438.scope.
Feb 02 15:27:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:09 compute-0 podman[241788]: 2026-02-02 15:27:09.800226453 +0000 UTC m=+0.183425646 container init e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:27:09 compute-0 podman[241788]: 2026-02-02 15:27:09.807842723 +0000 UTC m=+0.191041826 container start e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:27:09 compute-0 podman[241788]: 2026-02-02 15:27:09.831884658 +0000 UTC m=+0.215083761 container attach e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:27:10 compute-0 optimistic_hertz[241805]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:27:10 compute-0 optimistic_hertz[241805]: --> All data devices are unavailable
Feb 02 15:27:10 compute-0 systemd[1]: libpod-e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438.scope: Deactivated successfully.
Feb 02 15:27:10 compute-0 podman[241788]: 2026-02-02 15:27:10.203511303 +0000 UTC m=+0.586710386 container died e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de1b126f6c93389f7188e8d0bb980b8292bedbbb9d2f381482d1c8729c28a52-merged.mount: Deactivated successfully.
Feb 02 15:27:10 compute-0 podman[241788]: 2026-02-02 15:27:10.322413523 +0000 UTC m=+0.705612626 container remove e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:27:10 compute-0 systemd[1]: libpod-conmon-e315c5e4106b54d0eaf9f2e307c054a07a7e17a08994eb89181410a3a00c9438.scope: Deactivated successfully.
Feb 02 15:27:10 compute-0 sudo[241709]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:10 compute-0 sudo[241836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:27:10 compute-0 sudo[241836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:10 compute-0 sudo[241836]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:10 compute-0 sudo[241861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:27:10 compute-0 sudo[241861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.734004837 +0000 UTC m=+0.063846233 container create 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.689071529 +0000 UTC m=+0.018913015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:10 compute-0 systemd[1]: Started libpod-conmon-2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6.scope.
Feb 02 15:27:10 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.85513048 +0000 UTC m=+0.184971926 container init 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.859545007 +0000 UTC m=+0.189386413 container start 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:27:10 compute-0 heuristic_neumann[241916]: 167 167
Feb 02 15:27:10 compute-0 systemd[1]: libpod-2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6.scope: Deactivated successfully.
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.898069709 +0000 UTC m=+0.227911215 container attach 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:27:10 compute-0 podman[241900]: 2026-02-02 15:27:10.898581099 +0000 UTC m=+0.228422535 container died 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a6741fbd8c7c987945edf868612729c6ae9f892d2539d61aa8a16387bc69675-merged.mount: Deactivated successfully.
Feb 02 15:27:11 compute-0 podman[241900]: 2026-02-02 15:27:11.171878849 +0000 UTC m=+0.501720245 container remove 2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:27:11 compute-0 systemd[1]: libpod-conmon-2e5587d5f8ca9080614e023b6ace0b2273214789ceb6bed082b39726695130c6.scope: Deactivated successfully.
Feb 02 15:27:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:11 compute-0 podman[241941]: 2026-02-02 15:27:11.359537848 +0000 UTC m=+0.112423642 container create 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:27:11 compute-0 podman[241941]: 2026-02-02 15:27:11.263896308 +0000 UTC m=+0.016782082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:11 compute-0 ceph-mon[75334]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:11 compute-0 systemd[1]: Started libpod-conmon-5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112.scope.
Feb 02 15:27:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853c8b5863c6c7ecd902fc420cf111b802f9015c007725649de221c5960bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853c8b5863c6c7ecd902fc420cf111b802f9015c007725649de221c5960bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853c8b5863c6c7ecd902fc420cf111b802f9015c007725649de221c5960bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72853c8b5863c6c7ecd902fc420cf111b802f9015c007725649de221c5960bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:11 compute-0 podman[241941]: 2026-02-02 15:27:11.530934946 +0000 UTC m=+0.283820700 container init 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:27:11 compute-0 podman[241941]: 2026-02-02 15:27:11.536343483 +0000 UTC m=+0.289229227 container start 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:27:11 compute-0 podman[241941]: 2026-02-02 15:27:11.635836769 +0000 UTC m=+0.388722523 container attach 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:27:11 compute-0 modest_banach[241957]: {
Feb 02 15:27:11 compute-0 modest_banach[241957]:     "0": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:         {
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "devices": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "/dev/loop3"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             ],
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_name": "ceph_lv0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_size": "21470642176",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "name": "ceph_lv0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "tags": {
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_name": "ceph",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.crush_device_class": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.encrypted": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.objectstore": "bluestore",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_id": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.vdo": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.with_tpm": "0"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             },
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "vg_name": "ceph_vg0"
Feb 02 15:27:11 compute-0 modest_banach[241957]:         }
Feb 02 15:27:11 compute-0 modest_banach[241957]:     ],
Feb 02 15:27:11 compute-0 modest_banach[241957]:     "1": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:         {
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "devices": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "/dev/loop4"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             ],
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_name": "ceph_lv1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_size": "21470642176",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "name": "ceph_lv1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "tags": {
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_name": "ceph",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.crush_device_class": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.encrypted": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.objectstore": "bluestore",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_id": "1",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.vdo": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.with_tpm": "0"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             },
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "vg_name": "ceph_vg1"
Feb 02 15:27:11 compute-0 modest_banach[241957]:         }
Feb 02 15:27:11 compute-0 modest_banach[241957]:     ],
Feb 02 15:27:11 compute-0 modest_banach[241957]:     "2": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:         {
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "devices": [
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "/dev/loop5"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             ],
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_name": "ceph_lv2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_size": "21470642176",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "name": "ceph_lv2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "tags": {
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.cluster_name": "ceph",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.crush_device_class": "",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.encrypted": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.objectstore": "bluestore",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osd_id": "2",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.vdo": "0",
Feb 02 15:27:11 compute-0 modest_banach[241957]:                 "ceph.with_tpm": "0"
Feb 02 15:27:11 compute-0 modest_banach[241957]:             },
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "type": "block",
Feb 02 15:27:11 compute-0 modest_banach[241957]:             "vg_name": "ceph_vg2"
Feb 02 15:27:11 compute-0 modest_banach[241957]:         }
Feb 02 15:27:11 compute-0 modest_banach[241957]:     ]
Feb 02 15:27:11 compute-0 modest_banach[241957]: }
Feb 02 15:27:11 compute-0 systemd[1]: libpod-5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112.scope: Deactivated successfully.
Feb 02 15:27:11 compute-0 podman[241967]: 2026-02-02 15:27:11.830957135 +0000 UTC m=+0.021498315 container died 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:27:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-72853c8b5863c6c7ecd902fc420cf111b802f9015c007725649de221c5960bf3-merged.mount: Deactivated successfully.
Feb 02 15:27:12 compute-0 podman[241967]: 2026-02-02 15:27:12.253006496 +0000 UTC m=+0.443547636 container remove 5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:27:12 compute-0 systemd[1]: libpod-conmon-5a33647e41db463f71f20cf608e56e0eb9cde67caeedd45c014935baac76c112.scope: Deactivated successfully.
Feb 02 15:27:12 compute-0 sudo[241861]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:12 compute-0 sudo[241982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:27:12 compute-0 sudo[241982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:12 compute-0 sudo[241982]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:12 compute-0 sudo[242019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:27:12 compute-0 sudo[242019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:12 compute-0 podman[242007]: 2026-02-02 15:27:12.46876706 +0000 UTC m=+0.090554800 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:27:12 compute-0 podman[242006]: 2026-02-02 15:27:12.500080489 +0000 UTC m=+0.123283157 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.709023468 +0000 UTC m=+0.058640220 container create 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:27:12 compute-0 systemd[1]: Started libpod-conmon-42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66.scope.
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.674160049 +0000 UTC m=+0.023776841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.794462707 +0000 UTC m=+0.144079519 container init 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.801002366 +0000 UTC m=+0.150619138 container start 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:27:12 compute-0 sleepy_poitras[242103]: 167 167
Feb 02 15:27:12 compute-0 systemd[1]: libpod-42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66.scope: Deactivated successfully.
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.826891027 +0000 UTC m=+0.176507769 container attach 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:27:12 compute-0 podman[242086]: 2026-02-02 15:27:12.827770215 +0000 UTC m=+0.177386967 container died 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d3cc70549dc6e433f43536b2979e450f9d700b3a0c7488563a8328828ded86-merged.mount: Deactivated successfully.
Feb 02 15:27:13 compute-0 podman[242086]: 2026-02-02 15:27:13.00869045 +0000 UTC m=+0.358307192 container remove 42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:27:13 compute-0 systemd[1]: libpod-conmon-42a32be8db7500d9b28f4b30a713b39c5befece6777cb75b3ec32710d959ad66.scope: Deactivated successfully.
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.145081816 +0000 UTC m=+0.055433867 container create c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:27:13 compute-0 systemd[1]: Started libpod-conmon-c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf.scope.
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.109998843 +0000 UTC m=+0.020350874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:27:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de6f98bd8b1b8fda329488254c7662b056738d5477e7001eae24cfc31f396e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de6f98bd8b1b8fda329488254c7662b056738d5477e7001eae24cfc31f396e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de6f98bd8b1b8fda329488254c7662b056738d5477e7001eae24cfc31f396e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de6f98bd8b1b8fda329488254c7662b056738d5477e7001eae24cfc31f396e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.225508675 +0000 UTC m=+0.135860696 container init c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.231662238 +0000 UTC m=+0.142014249 container start c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.235490682 +0000 UTC m=+0.145842953 container attach c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:27:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:13 compute-0 ceph-mon[75334]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:13 compute-0 lvm[242222]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:27:13 compute-0 lvm[242223]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:27:13 compute-0 lvm[242223]: VG ceph_vg1 finished
Feb 02 15:27:13 compute-0 lvm[242222]: VG ceph_vg0 finished
Feb 02 15:27:13 compute-0 lvm[242225]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:27:13 compute-0 lvm[242225]: VG ceph_vg2 finished
Feb 02 15:27:13 compute-0 pedantic_gauss[242144]: {}
Feb 02 15:27:13 compute-0 podman[242128]: 2026-02-02 15:27:13.950693017 +0000 UTC m=+0.861045098 container died c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:27:13 compute-0 systemd[1]: libpod-c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf.scope: Deactivated successfully.
Feb 02 15:27:13 compute-0 systemd[1]: libpod-c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf.scope: Consumed 1.014s CPU time.
Feb 02 15:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de6f98bd8b1b8fda329488254c7662b056738d5477e7001eae24cfc31f396e3-merged.mount: Deactivated successfully.
Feb 02 15:27:14 compute-0 podman[242128]: 2026-02-02 15:27:14.00699255 +0000 UTC m=+0.917344601 container remove c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:27:14 compute-0 systemd[1]: libpod-conmon-c49792889a7ce14dcb0cb3c9410075c1bc5f48e7db8dde6b865f9608a2368fcf.scope: Deactivated successfully.
Feb 02 15:27:14 compute-0 sudo[242019]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:27:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:27:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:14 compute-0 sudo[242240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:27:14 compute-0 sudo[242240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:27:14 compute-0 sudo[242240]: pam_unix(sudo:session): session closed for user root
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:27:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:16 compute-0 ceph-mon[75334]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:17 compute-0 ceph-mon[75334]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:19 compute-0 ceph-mon[75334]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:21 compute-0 ceph-mon[75334]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:23 compute-0 ceph-mon[75334]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:25 compute-0 ceph-mon[75334]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:27:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5852 writes, 24K keys, 5852 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5852 writes, 999 syncs, 5.86 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379da30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ad379d8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:27:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:27 compute-0 ceph-mon[75334]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:27:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471133576' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:27:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:27:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471133576' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:27:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1471133576' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:27:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1471133576' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:27:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:29 compute-0 ceph-mon[75334]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:27:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7119 writes, 29K keys, 7119 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7119 writes, 1409 syncs, 5.05 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 337 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d615c4f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:27:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:31 compute-0 ceph-mon[75334]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:27:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5719 writes, 24K keys, 5719 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5719 writes, 898 syncs, 6.37 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedab4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559dcedaba30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 02 15:27:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:33 compute-0 ceph-mon[75334]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:35 compute-0 ceph-mon[75334]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:27:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:37 compute-0 ceph-mon[75334]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:39 compute-0 ceph-mon[75334]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:41 compute-0 ceph-mon[75334]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:27:42
Feb 02 15:27:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:27:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:27:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta']
Feb 02 15:27:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:27:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:43 compute-0 podman[242265]: 2026-02-02 15:27:43.339649966 +0000 UTC m=+0.080968173 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:27:43 compute-0 podman[242266]: 2026-02-02 15:27:43.340646487 +0000 UTC m=+0.080636303 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 02 15:27:43 compute-0 ceph-mon[75334]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:27:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:27:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:45 compute-0 ceph-mon[75334]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:47 compute-0 ceph-mon[75334]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:49 compute-0 ceph-mon[75334]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:51 compute-0 ceph-mon[75334]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:53 compute-0 ceph-mon[75334]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:27:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:27:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:55 compute-0 ceph-mon[75334]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:27:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:57 compute-0 ceph-mon[75334]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.860 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.861 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.882 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.882 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.882 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.901 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.901 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.901 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.902 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.902 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.902 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.903 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.903 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.903 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.938 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.939 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.939 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.940 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:27:57 compute-0 nova_compute[239545]: 2026-02-02 15:27:57.940 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:27:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:27:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342790017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.498 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:27:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/342790017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.685 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.688 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.688 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.688 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.780 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.780 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:27:58 compute-0 nova_compute[239545]: 2026-02-02 15:27:58.810 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:27:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:27:59.239 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:27:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:27:59.240 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:27:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:27:59.240 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:27:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:27:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258347456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:27:59 compute-0 nova_compute[239545]: 2026-02-02 15:27:59.405 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:27:59 compute-0 nova_compute[239545]: 2026-02-02 15:27:59.412 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:27:59 compute-0 nova_compute[239545]: 2026-02-02 15:27:59.447 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:27:59 compute-0 nova_compute[239545]: 2026-02-02 15:27:59.450 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:27:59 compute-0 nova_compute[239545]: 2026-02-02 15:27:59.450 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:27:59 compute-0 ceph-mon[75334]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:27:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/258347456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:28:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:01 compute-0 ceph-mon[75334]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:03 compute-0 ceph-mon[75334]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:05 compute-0 ceph-mon[75334]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:07 compute-0 ceph-mon[75334]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:09 compute-0 ceph-mon[75334]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:11 compute-0 ceph-mon[75334]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:13 compute-0 ceph-mon[75334]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:14 compute-0 sudo[242355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:28:14 compute-0 sudo[242355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:14 compute-0 sudo[242355]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:14 compute-0 podman[242380]: 2026-02-02 15:28:14.244559364 +0000 UTC m=+0.053000542 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 02 15:28:14 compute-0 sudo[242392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:28:14 compute-0 sudo[242392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:14 compute-0 podman[242379]: 2026-02-02 15:28:14.287391208 +0000 UTC m=+0.095417393 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:28:14 compute-0 sudo[242392]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:28:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:28:14 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:28:14 compute-0 sudo[242483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:28:14 compute-0 sudo[242483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:14 compute-0 sudo[242483]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:14 compute-0 sudo[242508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:28:14 compute-0 sudo[242508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.119076235 +0000 UTC m=+0.040445941 container create 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:28:15 compute-0 systemd[1]: Started libpod-conmon-6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603.scope.
Feb 02 15:28:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.10221981 +0000 UTC m=+0.023589546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.207526011 +0000 UTC m=+0.128895747 container init 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.213827077 +0000 UTC m=+0.135196783 container start 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.217922655 +0000 UTC m=+0.139292391 container attach 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:28:15 compute-0 infallible_cannon[242560]: 167 167
Feb 02 15:28:15 compute-0 systemd[1]: libpod-6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603.scope: Deactivated successfully.
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.221175446 +0000 UTC m=+0.142545172 container died 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cda25827c2c728176aa017d1885be2be8b2725654fb040c019106f33c379e0ab-merged.mount: Deactivated successfully.
Feb 02 15:28:15 compute-0 podman[242544]: 2026-02-02 15:28:15.261145272 +0000 UTC m=+0.182515018 container remove 6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:28:15 compute-0 systemd[1]: libpod-conmon-6e46fd8c17174a62151edc74379c270b6dc2d7b26a045f227162fa14e436a603.scope: Deactivated successfully.
Feb 02 15:28:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.370844138 +0000 UTC m=+0.033398771 container create db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:28:15 compute-0 systemd[1]: Started libpod-conmon-db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b.scope.
Feb 02 15:28:15 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.356870853 +0000 UTC m=+0.019425516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.459340515 +0000 UTC m=+0.121895148 container init db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.467082127 +0000 UTC m=+0.129636760 container start db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.479530584 +0000 UTC m=+0.142085217 container attach db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:28:15 compute-0 musing_gates[242600]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:28:15 compute-0 musing_gates[242600]: --> All data devices are unavailable
Feb 02 15:28:15 compute-0 systemd[1]: libpod-db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b.scope: Deactivated successfully.
Feb 02 15:28:15 compute-0 podman[242583]: 2026-02-02 15:28:15.869665897 +0000 UTC m=+0.532220550 container died db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:28:15 compute-0 ceph-mon[75334]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b554bbc5055d8a31c335237f188eab621728ec9255deadb7a8cb962797cd681-merged.mount: Deactivated successfully.
Feb 02 15:28:16 compute-0 podman[242583]: 2026-02-02 15:28:16.229237418 +0000 UTC m=+0.891792051 container remove db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_gates, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:28:16 compute-0 sudo[242508]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:16 compute-0 sudo[242633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:28:16 compute-0 sudo[242633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:16 compute-0 sudo[242633]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:16 compute-0 systemd[1]: libpod-conmon-db2dfd1136c9c080d80520a246a4f7ad04f4b96fb64d362c035876a19476ef6b.scope: Deactivated successfully.
Feb 02 15:28:16 compute-0 sudo[242658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:28:16 compute-0 sudo[242658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:16 compute-0 podman[242694]: 2026-02-02 15:28:16.633927644 +0000 UTC m=+0.020729377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:16 compute-0 podman[242694]: 2026-02-02 15:28:16.7862833 +0000 UTC m=+0.173085043 container create 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:28:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:16 compute-0 systemd[1]: Started libpod-conmon-2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2.scope.
Feb 02 15:28:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:17 compute-0 podman[242694]: 2026-02-02 15:28:17.134221358 +0000 UTC m=+0.521023091 container init 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:28:17 compute-0 podman[242694]: 2026-02-02 15:28:17.140816634 +0000 UTC m=+0.527618387 container start 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:28:17 compute-0 reverent_mahavira[242711]: 167 167
Feb 02 15:28:17 compute-0 systemd[1]: libpod-2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2.scope: Deactivated successfully.
Feb 02 15:28:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:17 compute-0 podman[242694]: 2026-02-02 15:28:17.373804463 +0000 UTC m=+0.760606216 container attach 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:28:17 compute-0 podman[242694]: 2026-02-02 15:28:17.374562346 +0000 UTC m=+0.761364079 container died 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:28:17 compute-0 ceph-mon[75334]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-734a74dbd4099d15316e0e7021e6abfd0ed69c4578c5a3c1c9b9e12f8496117b-merged.mount: Deactivated successfully.
Feb 02 15:28:17 compute-0 podman[242694]: 2026-02-02 15:28:17.526341033 +0000 UTC m=+0.913142786 container remove 2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mahavira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:28:17 compute-0 systemd[1]: libpod-conmon-2288be4303fdf8331cc7a52b6938d47d0153dc565c760c8c02ce609e707899c2.scope: Deactivated successfully.
Feb 02 15:28:17 compute-0 podman[242734]: 2026-02-02 15:28:17.684223361 +0000 UTC m=+0.052777174 container create b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:28:17 compute-0 systemd[1]: Started libpod-conmon-b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710.scope.
Feb 02 15:28:17 compute-0 podman[242734]: 2026-02-02 15:28:17.661175674 +0000 UTC m=+0.029729507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8f3d9185377f2cbd5b2f6bec4448d1a25f81f5251d12a7e10c03658940785c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8f3d9185377f2cbd5b2f6bec4448d1a25f81f5251d12a7e10c03658940785c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8f3d9185377f2cbd5b2f6bec4448d1a25f81f5251d12a7e10c03658940785c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8f3d9185377f2cbd5b2f6bec4448d1a25f81f5251d12a7e10c03658940785c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:17 compute-0 podman[242734]: 2026-02-02 15:28:17.785151345 +0000 UTC m=+0.153705188 container init b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:28:17 compute-0 podman[242734]: 2026-02-02 15:28:17.791417491 +0000 UTC m=+0.159971314 container start b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:28:17 compute-0 podman[242734]: 2026-02-02 15:28:17.803122485 +0000 UTC m=+0.171676318 container attach b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]: {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     "0": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "devices": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "/dev/loop3"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             ],
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_name": "ceph_lv0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_size": "21470642176",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "name": "ceph_lv0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "tags": {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_name": "ceph",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.crush_device_class": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.encrypted": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.objectstore": "bluestore",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_id": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.vdo": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.with_tpm": "0"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             },
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "vg_name": "ceph_vg0"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         }
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     ],
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     "1": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "devices": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "/dev/loop4"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             ],
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_name": "ceph_lv1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_size": "21470642176",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "name": "ceph_lv1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "tags": {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_name": "ceph",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.crush_device_class": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.encrypted": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.objectstore": "bluestore",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_id": "1",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.vdo": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.with_tpm": "0"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             },
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "vg_name": "ceph_vg1"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         }
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     ],
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     "2": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "devices": [
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "/dev/loop5"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             ],
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_name": "ceph_lv2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_size": "21470642176",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "name": "ceph_lv2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "tags": {
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.cluster_name": "ceph",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.crush_device_class": "",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.encrypted": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.objectstore": "bluestore",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osd_id": "2",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.vdo": "0",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:                 "ceph.with_tpm": "0"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             },
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "type": "block",
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:             "vg_name": "ceph_vg2"
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:         }
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]:     ]
Feb 02 15:28:18 compute-0 gifted_meninsky[242751]: }
Feb 02 15:28:18 compute-0 systemd[1]: libpod-b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710.scope: Deactivated successfully.
Feb 02 15:28:18 compute-0 podman[242734]: 2026-02-02 15:28:18.066607003 +0000 UTC m=+0.435160816 container died b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf8f3d9185377f2cbd5b2f6bec4448d1a25f81f5251d12a7e10c03658940785c-merged.mount: Deactivated successfully.
Feb 02 15:28:18 compute-0 podman[242734]: 2026-02-02 15:28:18.145523122 +0000 UTC m=+0.514076935 container remove b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:28:18 compute-0 systemd[1]: libpod-conmon-b9f0ba76926626c5546794985618e35fb62d7bad4c0bb128bc39f68aa8eb2710.scope: Deactivated successfully.
Feb 02 15:28:18 compute-0 sudo[242658]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:18 compute-0 sudo[242774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:28:18 compute-0 sudo[242774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:18 compute-0 sudo[242774]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:18 compute-0 sudo[242799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:28:18 compute-0 sudo[242799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.525938171 +0000 UTC m=+0.044404384 container create 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:28:18 compute-0 systemd[1]: Started libpod-conmon-971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7.scope.
Feb 02 15:28:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.591765871 +0000 UTC m=+0.110232134 container init 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.597041186 +0000 UTC m=+0.115507409 container start 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:28:18 compute-0 bold_mcclintock[242854]: 167 167
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.601382081 +0000 UTC m=+0.119848354 container attach 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:28:18 compute-0 systemd[1]: libpod-971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7.scope: Deactivated successfully.
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.602340731 +0000 UTC m=+0.120806954 container died 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.512884825 +0000 UTC m=+0.031351068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3488fb4e13fee8fefdab8aee0b69c49563709eaae525ea37a64624b8d9dcaaf-merged.mount: Deactivated successfully.
Feb 02 15:28:18 compute-0 podman[242837]: 2026-02-02 15:28:18.643156283 +0000 UTC m=+0.161622506 container remove 971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:28:18 compute-0 systemd[1]: libpod-conmon-971dcf7c52fa99e20dd2de0a3b4b45436b63438773ec31ed0c1f97cb70bd6ff7.scope: Deactivated successfully.
Feb 02 15:28:18 compute-0 podman[242880]: 2026-02-02 15:28:18.793746854 +0000 UTC m=+0.051141305 container create 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:28:18 compute-0 systemd[1]: Started libpod-conmon-8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d.scope.
Feb 02 15:28:18 compute-0 podman[242880]: 2026-02-02 15:28:18.773187563 +0000 UTC m=+0.030582094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:28:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58699f463a4e95445a052499b7276cb3b1fb52710744e77129e7092f43ce46f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58699f463a4e95445a052499b7276cb3b1fb52710744e77129e7092f43ce46f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58699f463a4e95445a052499b7276cb3b1fb52710744e77129e7092f43ce46f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58699f463a4e95445a052499b7276cb3b1fb52710744e77129e7092f43ce46f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:28:18 compute-0 podman[242880]: 2026-02-02 15:28:18.912989918 +0000 UTC m=+0.170384379 container init 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:28:18 compute-0 podman[242880]: 2026-02-02 15:28:18.92366245 +0000 UTC m=+0.181056901 container start 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:28:18 compute-0 podman[242880]: 2026-02-02 15:28:18.969039344 +0000 UTC m=+0.226433795 container attach 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:28:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:19 compute-0 ceph-mon[75334]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:19 compute-0 lvm[242973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:28:19 compute-0 lvm[242973]: VG ceph_vg0 finished
Feb 02 15:28:19 compute-0 lvm[242976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:28:19 compute-0 lvm[242976]: VG ceph_vg1 finished
Feb 02 15:28:19 compute-0 lvm[242978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:28:19 compute-0 lvm[242978]: VG ceph_vg2 finished
Feb 02 15:28:19 compute-0 crazy_dewdney[242897]: {}
Feb 02 15:28:19 compute-0 systemd[1]: libpod-8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d.scope: Deactivated successfully.
Feb 02 15:28:19 compute-0 systemd[1]: libpod-8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d.scope: Consumed 1.116s CPU time.
Feb 02 15:28:19 compute-0 podman[242880]: 2026-02-02 15:28:19.755665958 +0000 UTC m=+1.013060419 container died 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-58699f463a4e95445a052499b7276cb3b1fb52710744e77129e7092f43ce46f3-merged.mount: Deactivated successfully.
Feb 02 15:28:19 compute-0 podman[242880]: 2026-02-02 15:28:19.878195245 +0000 UTC m=+1.135589716 container remove 8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:28:19 compute-0 systemd[1]: libpod-conmon-8b51c2255286bbce6d6091f2c4edf1903f5aab20844d49189bee38bc5be68d5d.scope: Deactivated successfully.
Feb 02 15:28:19 compute-0 sudo[242799]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:28:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:28:19 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:19 compute-0 sudo[242993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:28:19 compute-0 sudo[242993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:28:20 compute-0 sudo[242993]: pam_unix(sudo:session): session closed for user root
Feb 02 15:28:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:28:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:21 compute-0 ceph-mon[75334]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Feb 02 15:28:23 compute-0 ceph-mon[75334]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Feb 02 15:28:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Feb 02 15:28:25 compute-0 ceph-mon[75334]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Feb 02 15:28:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 02 15:28:27 compute-0 ceph-mon[75334]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 02 15:28:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:28:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1118811894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:28:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:28:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1118811894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:28:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1118811894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:28:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1118811894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:28:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:29 compute-0 ceph-mon[75334]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:31 compute-0 ceph-mon[75334]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:33 compute-0 ceph-mon[75334]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:28:34 compute-0 ceph-osd[88227]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000032s
Feb 02 15:28:34 compute-0 ceph-osd[87170]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000020s
Feb 02 15:28:34 compute-0 ceph-osd[86115]: bluestore.MempoolThread fragmentation_score=0.000117 took=0.000017s
Feb 02 15:28:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Feb 02 15:28:35 compute-0 ceph-mon[75334]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Feb 02 15:28:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Feb 02 15:28:37 compute-0 ceph-mon[75334]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Feb 02 15:28:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 02 15:28:39 compute-0 ceph-mon[75334]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 02 15:28:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.368223) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121368259, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 251, "total_data_size": 2150249, "memory_usage": 2187904, "flush_reason": "Manual Compaction"}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb 02 15:28:41 compute-0 ceph-mon[75334]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121381193, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2107791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14966, "largest_seqno": 16340, "table_properties": {"data_size": 2101414, "index_size": 3577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13296, "raw_average_key_size": 19, "raw_value_size": 2088571, "raw_average_value_size": 3080, "num_data_blocks": 164, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770045983, "oldest_key_time": 1770045983, "file_creation_time": 1770046121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13039 microseconds, and 4332 cpu microseconds.
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.381257) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2107791 bytes OK
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.381279) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.383000) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.383021) EVENT_LOG_v1 {"time_micros": 1770046121383015, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.383044) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2144141, prev total WAL file size 2144141, number of live WAL files 2.
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.383685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2058KB)], [35(7356KB)]
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121383784, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9640678, "oldest_snapshot_seqno": -1}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4025 keys, 7831756 bytes, temperature: kUnknown
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121422465, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7831756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7802755, "index_size": 17819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98348, "raw_average_key_size": 24, "raw_value_size": 7727861, "raw_average_value_size": 1919, "num_data_blocks": 755, "num_entries": 4025, "num_filter_entries": 4025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.422693) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7831756 bytes
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.424088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.7 rd, 202.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4539, records dropped: 514 output_compression: NoCompression
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.424108) EVENT_LOG_v1 {"time_micros": 1770046121424098, "job": 16, "event": "compaction_finished", "compaction_time_micros": 38757, "compaction_time_cpu_micros": 18170, "output_level": 6, "num_output_files": 1, "total_output_size": 7831756, "num_input_records": 4539, "num_output_records": 4025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121424434, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046121425195, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.383568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.425320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.425328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.425330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.425332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:28:41.425333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:28:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:28:42
Feb 02 15:28:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:28:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:28:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 02 15:28:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:28:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:43 compute-0 ceph-mon[75334]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:28:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:28:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:45 compute-0 podman[243020]: 2026-02-02 15:28:45.33152542 +0000 UTC m=+0.064493803 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:28:45 compute-0 podman[243019]: 2026-02-02 15:28:45.355831187 +0000 UTC m=+0.087328688 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true)
Feb 02 15:28:45 compute-0 ceph-mon[75334]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:48 compute-0 ceph-mon[75334]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:50 compute-0 ceph-mon[75334]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:52 compute-0 ceph-mon[75334]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.570 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.572 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.572 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:28:52 compute-0 nova_compute[239545]: 2026-02-02 15:28:52.587 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:53 compute-0 nova_compute[239545]: 2026-02-02 15:28:53.600 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:53 compute-0 nova_compute[239545]: 2026-02-02 15:28:53.601 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3969617872069868e-06 of space, bias 4.0, pg target 0.001676354144648384 quantized to 16 (current 16)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:28:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:28:54 compute-0 ceph-mon[75334]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.579 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.580 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.580 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.580 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:28:55 compute-0 nova_compute[239545]: 2026-02-02 15:28:55.581 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:28:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:28:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3740033920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.143 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.290 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.292 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.293 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.293 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:28:56 compute-0 ceph-mon[75334]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3740033920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.492 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.493 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.590 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.671 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.671 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.686 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.708 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:28:56 compute-0 nova_compute[239545]: 2026-02-02 15:28:56.724 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:28:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:28:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:28:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917389922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:28:57 compute-0 nova_compute[239545]: 2026-02-02 15:28:57.218 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:28:57 compute-0 nova_compute[239545]: 2026-02-02 15:28:57.224 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:28:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:57 compute-0 nova_compute[239545]: 2026-02-02 15:28:57.322 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:28:57 compute-0 nova_compute[239545]: 2026-02-02 15:28:57.326 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:28:57 compute-0 nova_compute[239545]: 2026-02-02 15:28:57.327 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:28:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1917389922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.328 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.328 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.328 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.342 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.342 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.342 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.343 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:28:58 compute-0 nova_compute[239545]: 2026-02-02 15:28:58.344 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:28:58 compute-0 ceph-mon[75334]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:28:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:28:59.241 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:28:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:28:59.241 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:28:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:28:59.241 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:28:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:00 compute-0 ceph-mon[75334]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:01 compute-0 ceph-mon[75334]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:04 compute-0 ceph-mon[75334]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:06 compute-0 ceph-mon[75334]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:07 compute-0 ceph-mon[75334]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:10 compute-0 ceph-mon[75334]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:12 compute-0 ceph-mon[75334]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:14 compute-0 ceph-mon[75334]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:16 compute-0 podman[243110]: 2026-02-02 15:29:16.309386366 +0000 UTC m=+0.051314823 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:29:16 compute-0 podman[243109]: 2026-02-02 15:29:16.330470171 +0000 UTC m=+0.073523642 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:29:16 compute-0 ceph-mon[75334]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:18 compute-0 ceph-mon[75334]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb 02 15:29:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb 02 15:29:19 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb 02 15:29:20 compute-0 sudo[243152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:29:20 compute-0 sudo[243152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:20 compute-0 sudo[243152]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:20 compute-0 sudo[243177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:29:20 compute-0 sudo[243177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb 02 15:29:20 compute-0 ceph-mon[75334]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:20 compute-0 ceph-mon[75334]: osdmap e117: 3 total, 3 up, 3 in
Feb 02 15:29:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb 02 15:29:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb 02 15:29:20 compute-0 podman[243245]: 2026-02-02 15:29:20.477647421 +0000 UTC m=+0.070874423 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:29:20 compute-0 podman[243265]: 2026-02-02 15:29:20.616979514 +0000 UTC m=+0.050781641 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:29:20 compute-0 podman[243245]: 2026-02-02 15:29:20.635909651 +0000 UTC m=+0.229136633 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:29:21 compute-0 sudo[243177]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:29:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:21 compute-0 sudo[243430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:29:21 compute-0 sudo[243430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:21 compute-0 sudo[243430]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:21 compute-0 sudo[243455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:29:21 compute-0 sudo[243455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb 02 15:29:21 compute-0 ceph-mon[75334]: osdmap e118: 3 total, 3 up, 3 in
Feb 02 15:29:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb 02 15:29:21 compute-0 sudo[243455]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:29:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:29:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:21 compute-0 sudo[243511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:29:21 compute-0 sudo[243511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:21 compute-0 sudo[243511]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:21 compute-0 sudo[243536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:29:21 compute-0 sudo[243536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.336794792 +0000 UTC m=+0.122758818 container create d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.243403012 +0000 UTC m=+0.029367118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:22 compute-0 systemd[1]: Started libpod-conmon-d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4.scope.
Feb 02 15:29:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.516026135 +0000 UTC m=+0.301990181 container init d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:29:22 compute-0 ceph-mon[75334]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Feb 02 15:29:22 compute-0 ceph-mon[75334]: osdmap e119: 3 total, 3 up, 3 in
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:29:22 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.527546719 +0000 UTC m=+0.313510745 container start d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:29:22 compute-0 systemd[1]: libpod-d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4.scope: Deactivated successfully.
Feb 02 15:29:22 compute-0 condescending_jang[243590]: 167 167
Feb 02 15:29:22 compute-0 conmon[243590]: conmon d5fe2b4525989201a782 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4.scope/container/memory.events
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.588904432 +0000 UTC m=+0.374868448 container attach d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.589482184 +0000 UTC m=+0.375446210 container died d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac2d79d4469582876f2b02855b5e3e3641a758d84c5b5cac64da80309a1ac5ff-merged.mount: Deactivated successfully.
Feb 02 15:29:22 compute-0 podman[243574]: 2026-02-02 15:29:22.77111524 +0000 UTC m=+0.557079276 container remove d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:29:22 compute-0 systemd[1]: libpod-conmon-d5fe2b4525989201a7822c49febd35f7561c68e064f0709c526ee3cd365879e4.scope: Deactivated successfully.
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:22.923176284 +0000 UTC m=+0.027520928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.027795862 +0000 UTC m=+0.132140426 container create 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:29:23 compute-0 systemd[1]: Started libpod-conmon-06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308.scope.
Feb 02 15:29:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.226852691 +0000 UTC m=+0.331197295 container init 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.234563771 +0000 UTC m=+0.338908325 container start 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.238072859 +0000 UTC m=+0.342417463 container attach 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:29:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 29 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.7 MiB/s wr, 32 op/s
Feb 02 15:29:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb 02 15:29:23 compute-0 ceph-mon[75334]: pgmap v790: 305 pgs: 305 active+clean; 29 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.7 MiB/s wr, 32 op/s
Feb 02 15:29:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb 02 15:29:23 compute-0 hopeful_hawking[243632]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:29:23 compute-0 hopeful_hawking[243632]: --> All data devices are unavailable
Feb 02 15:29:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb 02 15:29:23 compute-0 systemd[1]: libpod-06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308.scope: Deactivated successfully.
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.674966354 +0000 UTC m=+0.779310948 container died 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 02 15:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-64679e90fe16a46cea675b167243509405c912a301564be798212ecd32205e5b-merged.mount: Deactivated successfully.
Feb 02 15:29:23 compute-0 podman[243615]: 2026-02-02 15:29:23.864186307 +0000 UTC m=+0.968530891 container remove 06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:29:23 compute-0 systemd[1]: libpod-conmon-06d5143dcb283f8afcd6f0cffb6b922d4601d6f450893f0c2050ff328b57a308.scope: Deactivated successfully.
Feb 02 15:29:23 compute-0 sudo[243536]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:23 compute-0 sudo[243665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:29:23 compute-0 sudo[243665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:23 compute-0 sudo[243665]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:24 compute-0 sudo[243690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:29:24 compute-0 sudo[243690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.314968738 +0000 UTC m=+0.078521082 container create c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.254928204 +0000 UTC m=+0.018480528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:24 compute-0 systemd[1]: Started libpod-conmon-c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9.scope.
Feb 02 15:29:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.478952475 +0000 UTC m=+0.242504869 container init c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.486854639 +0000 UTC m=+0.250406953 container start c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:29:24 compute-0 objective_jones[243744]: 167 167
Feb 02 15:29:24 compute-0 systemd[1]: libpod-c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9.scope: Deactivated successfully.
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.507189387 +0000 UTC m=+0.270741731 container attach c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.507988355 +0000 UTC m=+0.271540699 container died c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7897f148b6b02bff8275befb0d93df868a77f963e889e56aed1e7352f99410d3-merged.mount: Deactivated successfully.
Feb 02 15:29:24 compute-0 ceph-mon[75334]: osdmap e120: 3 total, 3 up, 3 in
Feb 02 15:29:24 compute-0 podman[243728]: 2026-02-02 15:29:24.843378021 +0000 UTC m=+0.606930335 container remove c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:29:24 compute-0 systemd[1]: libpod-conmon-c7fe5c2d9b327663b7174eec7e1e1a3c0f2886484527a9312ff3211e8aa3fab9.scope: Deactivated successfully.
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.007770057 +0000 UTC m=+0.068570523 container create e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:24.967922538 +0000 UTC m=+0.028723034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:25 compute-0 systemd[1]: Started libpod-conmon-e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c.scope.
Feb 02 15:29:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a927176678edd53b7f0b8c97ed35a8cbad95be19331c2eeead5b0a74ab18c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a927176678edd53b7f0b8c97ed35a8cbad95be19331c2eeead5b0a74ab18c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a927176678edd53b7f0b8c97ed35a8cbad95be19331c2eeead5b0a74ab18c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a927176678edd53b7f0b8c97ed35a8cbad95be19331c2eeead5b0a74ab18c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.186251503 +0000 UTC m=+0.247052049 container init e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.196255073 +0000 UTC m=+0.257055569 container start e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.259528429 +0000 UTC m=+0.320329015 container attach e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:29:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.9 MiB/s wr, 64 op/s
Feb 02 15:29:25 compute-0 boring_blackburn[243786]: {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     "0": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "devices": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "/dev/loop3"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             ],
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_name": "ceph_lv0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_size": "21470642176",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "name": "ceph_lv0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "tags": {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_name": "ceph",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.crush_device_class": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.encrypted": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.objectstore": "bluestore",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_id": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.vdo": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.with_tpm": "0"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             },
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "vg_name": "ceph_vg0"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         }
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     ],
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     "1": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "devices": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "/dev/loop4"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             ],
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_name": "ceph_lv1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_size": "21470642176",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "name": "ceph_lv1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "tags": {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_name": "ceph",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.crush_device_class": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.encrypted": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.objectstore": "bluestore",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_id": "1",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.vdo": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.with_tpm": "0"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             },
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "vg_name": "ceph_vg1"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         }
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     ],
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     "2": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "devices": [
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "/dev/loop5"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             ],
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_name": "ceph_lv2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_size": "21470642176",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "name": "ceph_lv2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "tags": {
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.cluster_name": "ceph",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.crush_device_class": "",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.encrypted": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.objectstore": "bluestore",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osd_id": "2",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.vdo": "0",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:                 "ceph.with_tpm": "0"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             },
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "type": "block",
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:             "vg_name": "ceph_vg2"
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:         }
Feb 02 15:29:25 compute-0 boring_blackburn[243786]:     ]
Feb 02 15:29:25 compute-0 boring_blackburn[243786]: }
Feb 02 15:29:25 compute-0 systemd[1]: libpod-e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c.scope: Deactivated successfully.
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.495299549 +0000 UTC m=+0.556100025 container died e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a927176678edd53b7f0b8c97ed35a8cbad95be19331c2eeead5b0a74ab18c0-merged.mount: Deactivated successfully.
Feb 02 15:29:25 compute-0 podman[243769]: 2026-02-02 15:29:25.683178542 +0000 UTC m=+0.743979048 container remove e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:29:25 compute-0 systemd[1]: libpod-conmon-e7f06a0fa6b5f8d7cab6570fdeb95f6f9bc4a6b0cbd2ab75c1b23df9ac32501c.scope: Deactivated successfully.
Feb 02 15:29:25 compute-0 ceph-mon[75334]: pgmap v792: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.9 MiB/s wr, 64 op/s
Feb 02 15:29:25 compute-0 sudo[243690]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:25 compute-0 sudo[243808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:29:25 compute-0 sudo[243808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:25 compute-0 sudo[243808]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:25 compute-0 sudo[243833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:29:25 compute-0 sudo[243833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.099204777 +0000 UTC m=+0.059362810 container create 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:29:26 compute-0 systemd[1]: Started libpod-conmon-0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e.scope.
Feb 02 15:29:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.060114515 +0000 UTC m=+0.020272608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.184513568 +0000 UTC m=+0.144671631 container init 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.188553568 +0000 UTC m=+0.148711611 container start 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:29:26 compute-0 trusting_pike[243888]: 167 167
Feb 02 15:29:26 compute-0 systemd[1]: libpod-0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e.scope: Deactivated successfully.
Feb 02 15:29:26 compute-0 conmon[243888]: conmon 0d505dc0bc6865e63724 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e.scope/container/memory.events
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.212029946 +0000 UTC m=+0.172187999 container attach 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.212343142 +0000 UTC m=+0.172501175 container died 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1406d0fae52d7c6aebe198d97b7b3ffb119605159e49770adaead1561a71615-merged.mount: Deactivated successfully.
Feb 02 15:29:26 compute-0 podman[243871]: 2026-02-02 15:29:26.403809425 +0000 UTC m=+0.363967468 container remove 0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pike, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:29:26 compute-0 systemd[1]: libpod-conmon-0d505dc0bc6865e63724cf14466b71a137c0784f46b2f8c176f2eb1caea6d34e.scope: Deactivated successfully.
Feb 02 15:29:26 compute-0 podman[243915]: 2026-02-02 15:29:26.644247607 +0000 UTC m=+0.104960585 container create 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:29:26 compute-0 podman[243915]: 2026-02-02 15:29:26.570135293 +0000 UTC m=+0.030848271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:29:26 compute-0 systemd[1]: Started libpod-conmon-15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064.scope.
Feb 02 15:29:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1ddfb0168250a8326a13a0cc70e884767adc984279bf5bf9a2e62e610a401b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1ddfb0168250a8326a13a0cc70e884767adc984279bf5bf9a2e62e610a401b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1ddfb0168250a8326a13a0cc70e884767adc984279bf5bf9a2e62e610a401b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1ddfb0168250a8326a13a0cc70e884767adc984279bf5bf9a2e62e610a401b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:29:26 compute-0 podman[243915]: 2026-02-02 15:29:26.778382665 +0000 UTC m=+0.239095693 container init 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:29:26 compute-0 podman[243915]: 2026-02-02 15:29:26.78903449 +0000 UTC m=+0.249747428 container start 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:29:26 compute-0 podman[243915]: 2026-02-02 15:29:26.793952728 +0000 UTC m=+0.254665706 container attach 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:29:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb 02 15:29:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb 02 15:29:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb 02 15:29:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 02 15:29:27 compute-0 lvm[244007]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:29:27 compute-0 lvm[244007]: VG ceph_vg0 finished
Feb 02 15:29:27 compute-0 lvm[244010]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:29:27 compute-0 lvm[244010]: VG ceph_vg1 finished
Feb 02 15:29:27 compute-0 lvm[244012]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:29:27 compute-0 lvm[244012]: VG ceph_vg2 finished
Feb 02 15:29:27 compute-0 affectionate_roentgen[243931]: {}
Feb 02 15:29:27 compute-0 systemd[1]: libpod-15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064.scope: Deactivated successfully.
Feb 02 15:29:27 compute-0 podman[243915]: 2026-02-02 15:29:27.554494202 +0000 UTC m=+1.015207150 container died 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c1ddfb0168250a8326a13a0cc70e884767adc984279bf5bf9a2e62e610a401b-merged.mount: Deactivated successfully.
Feb 02 15:29:27 compute-0 podman[243915]: 2026-02-02 15:29:27.699628322 +0000 UTC m=+1.160341280 container remove 15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:29:27 compute-0 systemd[1]: libpod-conmon-15b78f63858aea87f0c854abd763a8fdcc04cf490e345800c6ffbed96be91064.scope: Deactivated successfully.
Feb 02 15:29:27 compute-0 sudo[243833]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:29:27 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:29:27 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:27 compute-0 sudo[244028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:29:27 compute-0 sudo[244028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:29:27 compute-0 sudo[244028]: pam_unix(sudo:session): session closed for user root
Feb 02 15:29:28 compute-0 ceph-mon[75334]: osdmap e121: 3 total, 3 up, 3 in
Feb 02 15:29:28 compute-0 ceph-mon[75334]: pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 02 15:29:28 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:28 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:29:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:29:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390332654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:29:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390332654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2390332654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2390332654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Feb 02 15:29:30 compute-0 ceph-mon[75334]: pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Feb 02 15:29:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Feb 02 15:29:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:32 compute-0 ceph-mon[75334]: pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Feb 02 15:29:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Feb 02 15:29:34 compute-0 ceph-mon[75334]: pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Feb 02 15:29:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 204 B/s wr, 0 op/s
Feb 02 15:29:36 compute-0 ceph-mon[75334]: pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 204 B/s wr, 0 op/s
Feb 02 15:29:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 98 B/s wr, 0 op/s
Feb 02 15:29:38 compute-0 ceph-mon[75334]: pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 98 B/s wr, 0 op/s
Feb 02 15:29:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Feb 02 15:29:40 compute-0 ceph-mon[75334]: pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Feb 02 15:29:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:42 compute-0 ceph-mon[75334]: pgmap v801: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:29:42
Feb 02 15:29:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:29:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:29:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Feb 02 15:29:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:29:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:44 compute-0 ceph-mon[75334]: pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:29:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:29:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:46 compute-0 ceph-mon[75334]: pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:46.457 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:29:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:46.458 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:29:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:47 compute-0 podman[244054]: 2026-02-02 15:29:47.323381266 +0000 UTC m=+0.054788578 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:29:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:47 compute-0 podman[244053]: 2026-02-02 15:29:47.337494908 +0000 UTC m=+0.070819991 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Feb 02 15:29:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb 02 15:29:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb 02 15:29:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb 02 15:29:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb 02 15:29:48 compute-0 ceph-mon[75334]: pgmap v804: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:48 compute-0 ceph-mon[75334]: osdmap e122: 3 total, 3 up, 3 in
Feb 02 15:29:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb 02 15:29:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb 02 15:29:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb 02 15:29:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb 02 15:29:49 compute-0 ceph-mon[75334]: osdmap e123: 3 total, 3 up, 3 in
Feb 02 15:29:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb 02 15:29:50 compute-0 ceph-mon[75334]: pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:29:50 compute-0 ceph-mon[75334]: osdmap e124: 3 total, 3 up, 3 in
Feb 02 15:29:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Feb 02 15:29:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 02 15:29:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb 02 15:29:52 compute-0 ceph-mon[75334]: pgmap v809: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Feb 02 15:29:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb 02 15:29:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb 02 15:29:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.9 KiB/s wr, 37 op/s
Feb 02 15:29:53 compute-0 ceph-mon[75334]: osdmap e125: 3 total, 3 up, 3 in
Feb 02 15:29:53 compute-0 ceph-mon[75334]: pgmap v811: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.9 KiB/s wr, 37 op/s
Feb 02 15:29:53 compute-0 nova_compute[239545]: 2026-02-02 15:29:53.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:29:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/450103298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:29:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/450103298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:29:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310079891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:29:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310079891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.114580757909666e-07 of space, bias 1.0, pg target 0.00015343742273729 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659701442302434 of space, bias 1.0, pg target 0.19979104326907302 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4890410079305244e-06 of space, bias 4.0, pg target 0.0017868492095166294 quantized to 16 (current 16)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:29:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:29:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/450103298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/450103298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2310079891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2310079891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:54 compute-0 nova_compute[239545]: 2026-02-02 15:29:54.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.1 KiB/s wr, 108 op/s
Feb 02 15:29:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb 02 15:29:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb 02 15:29:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb 02 15:29:55 compute-0 ceph-mon[75334]: pgmap v812: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.1 KiB/s wr, 108 op/s
Feb 02 15:29:55 compute-0 nova_compute[239545]: 2026-02-02 15:29:55.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:55 compute-0 nova_compute[239545]: 2026-02-02 15:29:55.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:56.460 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:29:56 compute-0 ceph-mon[75334]: osdmap e126: 3 total, 3 up, 3 in
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.561 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.562 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.586 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.586 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:29:56 compute-0 nova_compute[239545]: 2026-02-02 15:29:56.587 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:29:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:29:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb 02 15:29:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb 02 15:29:56 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb 02 15:29:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:29:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418876962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.104 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:29:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:29:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088383216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:29:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088383216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.252 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.254 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.254 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.254 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.316 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.317 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:29:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 104 op/s
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.341 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:29:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:29:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827527377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.872 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.877 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.889 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.891 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:29:57 compute-0 nova_compute[239545]: 2026-02-02 15:29:57.891 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:29:57 compute-0 ceph-mon[75334]: osdmap e127: 3 total, 3 up, 3 in
Feb 02 15:29:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3418876962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:29:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4088383216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:29:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4088383216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:29:57 compute-0 ceph-mon[75334]: pgmap v815: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 104 op/s
Feb 02 15:29:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2827527377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:29:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:59.242 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:29:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:59.242 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:29:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:29:59.243 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:29:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.4 KiB/s wr, 108 op/s
Feb 02 15:29:59 compute-0 nova_compute[239545]: 2026-02-02 15:29:59.874 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:59 compute-0 nova_compute[239545]: 2026-02-02 15:29:59.875 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:59 compute-0 nova_compute[239545]: 2026-02-02 15:29:59.875 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:29:59 compute-0 nova_compute[239545]: 2026-02-02 15:29:59.875 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:30:00 compute-0 ceph-mon[75334]: pgmap v816: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.4 KiB/s wr, 108 op/s
Feb 02 15:30:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 111 op/s
Feb 02 15:30:01 compute-0 nova_compute[239545]: 2026-02-02 15:30:01.541 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb 02 15:30:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb 02 15:30:01 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb 02 15:30:02 compute-0 ceph-mon[75334]: pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 111 op/s
Feb 02 15:30:02 compute-0 ceph-mon[75334]: osdmap e128: 3 total, 3 up, 3 in
Feb 02 15:30:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151012886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/151012886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.4 KiB/s wr, 62 op/s
Feb 02 15:30:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/151012886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/151012886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2604678184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2604678184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:04 compute-0 ceph-mon[75334]: pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.4 KiB/s wr, 62 op/s
Feb 02 15:30:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2604678184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2604678184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.9 KiB/s wr, 56 op/s
Feb 02 15:30:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb 02 15:30:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb 02 15:30:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb 02 15:30:06 compute-0 ceph-mon[75334]: pgmap v820: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.9 KiB/s wr, 56 op/s
Feb 02 15:30:06 compute-0 ceph-mon[75334]: osdmap e129: 3 total, 3 up, 3 in
Feb 02 15:30:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb 02 15:30:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb 02 15:30:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb 02 15:30:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 31 op/s
Feb 02 15:30:07 compute-0 ceph-mon[75334]: osdmap e130: 3 total, 3 up, 3 in
Feb 02 15:30:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 02 15:30:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 02 15:30:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 02 15:30:08 compute-0 ceph-mon[75334]: pgmap v823: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 31 op/s
Feb 02 15:30:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Feb 02 15:30:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 02 15:30:09 compute-0 ceph-mon[75334]: osdmap e131: 3 total, 3 up, 3 in
Feb 02 15:30:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 02 15:30:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 02 15:30:10 compute-0 ceph-mon[75334]: pgmap v825: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Feb 02 15:30:10 compute-0 ceph-mon[75334]: osdmap e132: 3 total, 3 up, 3 in
Feb 02 15:30:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.8 KiB/s wr, 66 op/s
Feb 02 15:30:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 02 15:30:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 02 15:30:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 02 15:30:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722809092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722809092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/719124641' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/719124641' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: pgmap v827: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.8 KiB/s wr, 66 op/s
Feb 02 15:30:12 compute-0 ceph-mon[75334]: osdmap e133: 3 total, 3 up, 3 in
Feb 02 15:30:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/722809092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/722809092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/719124641' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/719124641' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747220463' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747220463' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 6.3 KiB/s wr, 146 op/s
Feb 02 15:30:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3747220463' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3747220463' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:13 compute-0 ceph-mon[75334]: pgmap v829: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 6.3 KiB/s wr, 146 op/s
Feb 02 15:30:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2632750884' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2632750884' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2632750884' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2632750884' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 4.9 KiB/s wr, 158 op/s
Feb 02 15:30:15 compute-0 ceph-mon[75334]: pgmap v830: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 4.9 KiB/s wr, 158 op/s
Feb 02 15:30:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 02 15:30:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 02 15:30:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 02 15:30:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 02 15:30:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 02 15:30:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 02 15:30:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 6.2 KiB/s wr, 199 op/s
Feb 02 15:30:17 compute-0 ceph-mon[75334]: osdmap e134: 3 total, 3 up, 3 in
Feb 02 15:30:17 compute-0 ceph-mon[75334]: osdmap e135: 3 total, 3 up, 3 in
Feb 02 15:30:17 compute-0 ceph-mon[75334]: pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 6.2 KiB/s wr, 199 op/s
Feb 02 15:30:18 compute-0 podman[244144]: 2026-02-02 15:30:18.297290515 +0000 UTC m=+0.038111201 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 02 15:30:18 compute-0 podman[244143]: 2026-02-02 15:30:18.342435346 +0000 UTC m=+0.086296854 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:30:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 02 15:30:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 02 15:30:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 02 15:30:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 5.2 KiB/s wr, 120 op/s
Feb 02 15:30:20 compute-0 ceph-mon[75334]: osdmap e136: 3 total, 3 up, 3 in
Feb 02 15:30:20 compute-0 ceph-mon[75334]: pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 5.2 KiB/s wr, 120 op/s
Feb 02 15:30:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3368007589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3368007589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 02 15:30:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3368007589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3368007589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 02 15:30:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 02 15:30:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Feb 02 15:30:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:22 compute-0 ceph-mon[75334]: osdmap e137: 3 total, 3 up, 3 in
Feb 02 15:30:22 compute-0 ceph-mon[75334]: pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Feb 02 15:30:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 02 15:30:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 02 15:30:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 02 15:30:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543731041' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543731041' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.3 KiB/s wr, 62 op/s
Feb 02 15:30:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 02 15:30:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 02 15:30:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 02 15:30:24 compute-0 ceph-mon[75334]: osdmap e138: 3 total, 3 up, 3 in
Feb 02 15:30:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1543731041' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1543731041' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:24 compute-0 ceph-mon[75334]: pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.3 KiB/s wr, 62 op/s
Feb 02 15:30:25 compute-0 ceph-mon[75334]: osdmap e139: 3 total, 3 up, 3 in
Feb 02 15:30:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 104 op/s
Feb 02 15:30:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 02 15:30:26 compute-0 ceph-mon[75334]: pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 104 op/s
Feb 02 15:30:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 02 15:30:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 02 15:30:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 02 15:30:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 02 15:30:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 02 15:30:27 compute-0 ceph-mon[75334]: osdmap e140: 3 total, 3 up, 3 in
Feb 02 15:30:27 compute-0 ceph-mon[75334]: osdmap e141: 3 total, 3 up, 3 in
Feb 02 15:30:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 KiB/s wr, 74 op/s
Feb 02 15:30:27 compute-0 sudo[244187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:30:27 compute-0 sudo[244187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:27 compute-0 sudo[244187]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:28 compute-0 sudo[244212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:30:28 compute-0 sudo[244212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 02 15:30:28 compute-0 ceph-mon[75334]: pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 KiB/s wr, 74 op/s
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712469466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712469466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:28 compute-0 sudo[244212]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:30:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:30:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:30:28 compute-0 sudo[244267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:30:28 compute-0 sudo[244267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:28 compute-0 sudo[244267]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:28 compute-0 sudo[244292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:30:28 compute-0 sudo[244292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:28 compute-0 podman[244330]: 2026-02-02 15:30:28.967332139 +0000 UTC m=+0.032026506 container create e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:30:28 compute-0 systemd[1]: Started libpod-conmon-e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521.scope.
Feb 02 15:30:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:29.030127383 +0000 UTC m=+0.094821770 container init e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:29.036948235 +0000 UTC m=+0.101642642 container start e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:30:29 compute-0 goofy_zhukovsky[244347]: 167 167
Feb 02 15:30:29 compute-0 systemd[1]: libpod-e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521.scope: Deactivated successfully.
Feb 02 15:30:29 compute-0 conmon[244347]: conmon e0f7a9a5fe3dbccf7661 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521.scope/container/memory.events
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:29.040669322 +0000 UTC m=+0.105363719 container attach e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:29.041109569 +0000 UTC m=+0.105803946 container died e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:28.951798265 +0000 UTC m=+0.016492652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d14c8c2793a871d0623d941d16a551204e06c9e2976916584830f157958d87-merged.mount: Deactivated successfully.
Feb 02 15:30:29 compute-0 podman[244330]: 2026-02-02 15:30:29.076592212 +0000 UTC m=+0.141286579 container remove e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:30:29 compute-0 systemd[1]: libpod-conmon-e0f7a9a5fe3dbccf766175cf33f69fa108542d9dad8d9ffadfa145411ea4e521.scope: Deactivated successfully.
Feb 02 15:30:29 compute-0 ceph-mon[75334]: osdmap e142: 3 total, 3 up, 3 in
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3712469466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3712469466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:30:29 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.211602646 +0000 UTC m=+0.029874586 container create e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:30:29 compute-0 systemd[1]: Started libpod-conmon-e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe.scope.
Feb 02 15:30:29 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.288142859 +0000 UTC m=+0.106414849 container init e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.198772022 +0000 UTC m=+0.017043982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.302688956 +0000 UTC m=+0.120960936 container start e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.306478887 +0000 UTC m=+0.124750837 container attach e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:30:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 57 op/s
Feb 02 15:30:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 02 15:30:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 02 15:30:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 02 15:30:29 compute-0 epic_cartwright[244386]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:30:29 compute-0 epic_cartwright[244386]: --> All data devices are unavailable
Feb 02 15:30:29 compute-0 systemd[1]: libpod-e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe.scope: Deactivated successfully.
Feb 02 15:30:29 compute-0 conmon[244386]: conmon e2a9f9473438dbf5fbfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe.scope/container/memory.events
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.699779999 +0000 UTC m=+0.518051939 container died e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1baf189aa023af08b451c16cf0a7f2e327d4677ab183804fec62cca0a9115ca-merged.mount: Deactivated successfully.
Feb 02 15:30:29 compute-0 podman[244370]: 2026-02-02 15:30:29.72901606 +0000 UTC m=+0.547288000 container remove e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:30:29 compute-0 systemd[1]: libpod-conmon-e2a9f9473438dbf5fbfbcbf690073da079b8c798239c885f0ec8c935c5aedcfe.scope: Deactivated successfully.
Feb 02 15:30:29 compute-0 sudo[244292]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:29 compute-0 sudo[244417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:30:29 compute-0 sudo[244417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:29 compute-0 sudo[244417]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:29 compute-0 sudo[244442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:30:29 compute-0 sudo[244442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.089483507 +0000 UTC m=+0.037184567 container create 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:30:30 compute-0 ceph-mon[75334]: pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 57 op/s
Feb 02 15:30:30 compute-0 ceph-mon[75334]: osdmap e143: 3 total, 3 up, 3 in
Feb 02 15:30:30 compute-0 systemd[1]: Started libpod-conmon-60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f.scope.
Feb 02 15:30:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.157621218 +0000 UTC m=+0.105322298 container init 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.161881706 +0000 UTC m=+0.109582766 container start 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.164664258 +0000 UTC m=+0.112365318 container attach 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:30:30 compute-0 jolly_lamarr[244495]: 167 167
Feb 02 15:30:30 compute-0 systemd[1]: libpod-60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f.scope: Deactivated successfully.
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.166218576 +0000 UTC m=+0.113919636 container died 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.071602866 +0000 UTC m=+0.019303966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-10002b2e40b8cb20229437646177e30e832a490eb297accd8cf11309c34daacc-merged.mount: Deactivated successfully.
Feb 02 15:30:30 compute-0 podman[244479]: 2026-02-02 15:30:30.196506917 +0000 UTC m=+0.144207977 container remove 60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lamarr, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:30:30 compute-0 systemd[1]: libpod-conmon-60fb1b0a492501b289042dbbc3a6202bfaa5868f9305082be473fe6f0098ad8f.scope: Deactivated successfully.
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.339544499 +0000 UTC m=+0.056958829 container create fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:30:30 compute-0 systemd[1]: Started libpod-conmon-fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e.scope.
Feb 02 15:30:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.316425384 +0000 UTC m=+0.033839804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0dec9393dcb9735a7d4e9a798187e0571bff15e19e0597b93035251da8cc58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0dec9393dcb9735a7d4e9a798187e0571bff15e19e0597b93035251da8cc58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0dec9393dcb9735a7d4e9a798187e0571bff15e19e0597b93035251da8cc58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0dec9393dcb9735a7d4e9a798187e0571bff15e19e0597b93035251da8cc58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.426196615 +0000 UTC m=+0.143610955 container init fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.432939145 +0000 UTC m=+0.150353465 container start fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.436382562 +0000 UTC m=+0.153796902 container attach fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]: {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     "0": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "devices": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "/dev/loop3"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             ],
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_name": "ceph_lv0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_size": "21470642176",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "name": "ceph_lv0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "tags": {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_name": "ceph",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.crush_device_class": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.encrypted": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.objectstore": "bluestore",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_id": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.vdo": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.with_tpm": "0"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             },
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "vg_name": "ceph_vg0"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         }
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     ],
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     "1": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "devices": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "/dev/loop4"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             ],
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_name": "ceph_lv1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_size": "21470642176",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "name": "ceph_lv1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "tags": {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_name": "ceph",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.crush_device_class": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.encrypted": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.objectstore": "bluestore",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_id": "1",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.vdo": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.with_tpm": "0"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             },
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "vg_name": "ceph_vg1"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         }
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     ],
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     "2": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "devices": [
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "/dev/loop5"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             ],
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_name": "ceph_lv2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_size": "21470642176",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "name": "ceph_lv2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "tags": {
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.cluster_name": "ceph",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.crush_device_class": "",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.encrypted": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.objectstore": "bluestore",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osd_id": "2",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.vdo": "0",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:                 "ceph.with_tpm": "0"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             },
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "type": "block",
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:             "vg_name": "ceph_vg2"
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:         }
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]:     ]
Feb 02 15:30:30 compute-0 crazy_ishizaka[244537]: }
Feb 02 15:30:30 compute-0 systemd[1]: libpod-fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e.scope: Deactivated successfully.
Feb 02 15:30:30 compute-0 conmon[244537]: conmon fa7d3a85e22d4a105dad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e.scope/container/memory.events
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.695012261 +0000 UTC m=+0.412426581 container died fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd0dec9393dcb9735a7d4e9a798187e0571bff15e19e0597b93035251da8cc58-merged.mount: Deactivated successfully.
Feb 02 15:30:30 compute-0 podman[244520]: 2026-02-02 15:30:30.735169287 +0000 UTC m=+0.452583607 container remove fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:30:30 compute-0 systemd[1]: libpod-conmon-fa7d3a85e22d4a105dadbb1eb18b156dc29621c5e3d2b0c16c732900270ab93e.scope: Deactivated successfully.
Feb 02 15:30:30 compute-0 sudo[244442]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:30 compute-0 sudo[244560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:30:30 compute-0 sudo[244560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:30 compute-0 sudo[244560]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:30 compute-0 sudo[244585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:30:30 compute-0 sudo[244585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 02 15:30:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 02 15:30:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.173881338 +0000 UTC m=+0.043392086 container create e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:30:31 compute-0 systemd[1]: Started libpod-conmon-e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f.scope.
Feb 02 15:30:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.233818336 +0000 UTC m=+0.103329184 container init e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.23852205 +0000 UTC m=+0.108032808 container start e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.241532931 +0000 UTC m=+0.111043759 container attach e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:30:31 compute-0 adoring_blackwell[244639]: 167 167
Feb 02 15:30:31 compute-0 systemd[1]: libpod-e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f.scope: Deactivated successfully.
Feb 02 15:30:31 compute-0 conmon[244639]: conmon e4ec175bd71d8a8e46aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f.scope/container/memory.events
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.243756684 +0000 UTC m=+0.113267472 container died e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.154604665 +0000 UTC m=+0.024115473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-10f21b247e66f0f100b6b1a2e34de26a9825b405f9324abc4ae74feaa0932f88-merged.mount: Deactivated successfully.
Feb 02 15:30:31 compute-0 podman[244623]: 2026-02-02 15:30:31.268950286 +0000 UTC m=+0.138461044 container remove e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:30:31 compute-0 systemd[1]: libpod-conmon-e4ec175bd71d8a8e46aa8e3ce710af3aa834ebd288e86715ed82baaa3b02c42f.scope: Deactivated successfully.
Feb 02 15:30:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Feb 02 15:30:31 compute-0 podman[244662]: 2026-02-02 15:30:31.381548281 +0000 UTC m=+0.031617070 container create ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:30:31 compute-0 systemd[1]: Started libpod-conmon-ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32.scope.
Feb 02 15:30:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a792884774e661c7971fd58dba5138374a42f55e9fe23429f29fc6a3a9571181/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a792884774e661c7971fd58dba5138374a42f55e9fe23429f29fc6a3a9571181/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a792884774e661c7971fd58dba5138374a42f55e9fe23429f29fc6a3a9571181/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a792884774e661c7971fd58dba5138374a42f55e9fe23429f29fc6a3a9571181/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:30:31 compute-0 podman[244662]: 2026-02-02 15:30:31.447337245 +0000 UTC m=+0.097406044 container init ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:30:31 compute-0 podman[244662]: 2026-02-02 15:30:31.45449882 +0000 UTC m=+0.104567609 container start ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:30:31 compute-0 podman[244662]: 2026-02-02 15:30:31.457563984 +0000 UTC m=+0.107632773 container attach ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:30:31 compute-0 podman[244662]: 2026-02-02 15:30:31.368514249 +0000 UTC m=+0.018583058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:30:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 02 15:30:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 02 15:30:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 02 15:30:32 compute-0 lvm[244757]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:30:32 compute-0 lvm[244757]: VG ceph_vg1 finished
Feb 02 15:30:32 compute-0 lvm[244756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:30:32 compute-0 lvm[244756]: VG ceph_vg0 finished
Feb 02 15:30:32 compute-0 lvm[244759]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:30:32 compute-0 lvm[244759]: VG ceph_vg2 finished
Feb 02 15:30:32 compute-0 ceph-mon[75334]: osdmap e144: 3 total, 3 up, 3 in
Feb 02 15:30:32 compute-0 ceph-mon[75334]: pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Feb 02 15:30:32 compute-0 ceph-mon[75334]: osdmap e145: 3 total, 3 up, 3 in
Feb 02 15:30:32 compute-0 naughty_joliot[244678]: {}
Feb 02 15:30:32 compute-0 systemd[1]: libpod-ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32.scope: Deactivated successfully.
Feb 02 15:30:32 compute-0 systemd[1]: libpod-ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32.scope: Consumed 1.008s CPU time.
Feb 02 15:30:32 compute-0 podman[244662]: 2026-02-02 15:30:32.18830235 +0000 UTC m=+0.838371159 container died ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:30:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a792884774e661c7971fd58dba5138374a42f55e9fe23429f29fc6a3a9571181-merged.mount: Deactivated successfully.
Feb 02 15:30:32 compute-0 podman[244662]: 2026-02-02 15:30:32.228951454 +0000 UTC m=+0.879020253 container remove ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_joliot, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:30:32 compute-0 systemd[1]: libpod-conmon-ea46d6d59fdec87ac5fe08255c9a9e481abcd3b84feb047bf921c0231c857e32.scope: Deactivated successfully.
Feb 02 15:30:32 compute-0 sudo[244585]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:30:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:30:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:32 compute-0 sudo[244774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:30:32 compute-0 sudo[244774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:30:32 compute-0 sudo[244774]: pam_unix(sudo:session): session closed for user root
Feb 02 15:30:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 02 15:30:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 02 15:30:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 02 15:30:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:30:33 compute-0 ceph-mon[75334]: osdmap e146: 3 total, 3 up, 3 in
Feb 02 15:30:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Feb 02 15:30:34 compute-0 ceph-mon[75334]: pgmap v852: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Feb 02 15:30:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 02 15:30:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 02 15:30:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 02 15:30:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 76 op/s
Feb 02 15:30:36 compute-0 ceph-mon[75334]: osdmap e147: 3 total, 3 up, 3 in
Feb 02 15:30:36 compute-0 ceph-mon[75334]: pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 76 op/s
Feb 02 15:30:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 02 15:30:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 02 15:30:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 02 15:30:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Feb 02 15:30:37 compute-0 ceph-mon[75334]: osdmap e148: 3 total, 3 up, 3 in
Feb 02 15:30:37 compute-0 ceph-mon[75334]: pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Feb 02 15:30:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 02 15:30:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 02 15:30:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 02 15:30:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Feb 02 15:30:39 compute-0 ceph-mon[75334]: osdmap e149: 3 total, 3 up, 3 in
Feb 02 15:30:39 compute-0 ceph-mon[75334]: pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Feb 02 15:30:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/656138149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/656138149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/656138149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/656138149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 46 op/s
Feb 02 15:30:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2600459653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2600459653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 02 15:30:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 02 15:30:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 02 15:30:42 compute-0 ceph-mon[75334]: pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 46 op/s
Feb 02 15:30:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2600459653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2600459653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:30:42
Feb 02 15:30:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:30:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:30:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 02 15:30:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:30:43 compute-0 ceph-mon[75334]: osdmap e150: 3 total, 3 up, 3 in
Feb 02 15:30:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Feb 02 15:30:44 compute-0 ceph-mon[75334]: pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:30:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:30:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Feb 02 15:30:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1442762875' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1442762875' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:46 compute-0 ceph-mon[75334]: pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Feb 02 15:30:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1442762875' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1442762875' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397834092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1397834092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 02 15:30:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 02 15:30:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 02 15:30:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Feb 02 15:30:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1397834092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1397834092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:47 compute-0 ceph-mon[75334]: osdmap e151: 3 total, 3 up, 3 in
Feb 02 15:30:48 compute-0 ceph-mon[75334]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Feb 02 15:30:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:48.634 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:30:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:48.636 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:30:49 compute-0 podman[244801]: 2026-02-02 15:30:49.30655808 +0000 UTC m=+0.051022596 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 02 15:30:49 compute-0 podman[244800]: 2026-02-02 15:30:49.352404448 +0000 UTC m=+0.097103370 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:30:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Feb 02 15:30:50 compute-0 ceph-mon[75334]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Feb 02 15:30:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.9 KiB/s wr, 58 op/s
Feb 02 15:30:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:52 compute-0 ceph-mon[75334]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.9 KiB/s wr, 58 op/s
Feb 02 15:30:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Feb 02 15:30:53 compute-0 nova_compute[239545]: 2026-02-02 15:30:53.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119237603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119237603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.0462658646097872e-06 of space, bias 1.0, pg target 0.0003138797593829361 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659881067421937 of space, bias 1.0, pg target 0.19979643202265812 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7128970665399813e-06 of space, bias 4.0, pg target 0.0020554764798479774 quantized to 16 (current 16)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:30:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:30:54 compute-0 ceph-mon[75334]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Feb 02 15:30:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/119237603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/119237603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956468405' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956468405' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:30:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1956468405' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1956468405' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:55 compute-0 nova_compute[239545]: 2026-02-02 15:30:55.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:56 compute-0 ceph-mon[75334]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:30:56 compute-0 nova_compute[239545]: 2026-02-02 15:30:56.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:56 compute-0 nova_compute[239545]: 2026-02-02 15:30:56.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:30:56 compute-0 nova_compute[239545]: 2026-02-02 15:30:56.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:30:56 compute-0 nova_compute[239545]: 2026-02-02 15:30:56.558 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:30:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:30:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765096263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765096263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Feb 02 15:30:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1765096263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1765096263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.568 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.568 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.568 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:30:57 compute-0 nova_compute[239545]: 2026-02-02 15:30:57.568 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:30:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:30:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/999519939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:30:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/999519939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:30:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821025271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.139 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.260 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.261 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.262 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.262 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.328 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.328 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.344 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:30:58 compute-0 ceph-mon[75334]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Feb 02 15:30:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/999519939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:30:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/999519939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:30:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3821025271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:30:58 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:58.637 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:30:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:30:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010302573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.873 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.877 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.900 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.901 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:30:58 compute-0 nova_compute[239545]: 2026-02-02 15:30:58.902 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:30:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:59.243 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:30:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:59.243 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:30:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:30:59.243 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:30:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Feb 02 15:30:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2010302573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:30:59 compute-0 nova_compute[239545]: 2026-02-02 15:30:59.903 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:59 compute-0 nova_compute[239545]: 2026-02-02 15:30:59.903 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:59 compute-0 nova_compute[239545]: 2026-02-02 15:30:59.904 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:30:59 compute-0 nova_compute[239545]: 2026-02-02 15:30:59.904 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:31:00 compute-0 ceph-mon[75334]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Feb 02 15:31:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Feb 02 15:31:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 02 15:31:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 02 15:31:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 02 15:31:02 compute-0 ceph-mon[75334]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Feb 02 15:31:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Feb 02 15:31:03 compute-0 ceph-mon[75334]: osdmap e152: 3 total, 3 up, 3 in
Feb 02 15:31:03 compute-0 ceph-mon[75334]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Feb 02 15:31:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 02 15:31:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 02 15:31:04 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 02 15:31:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Feb 02 15:31:05 compute-0 ceph-mon[75334]: osdmap e153: 3 total, 3 up, 3 in
Feb 02 15:31:05 compute-0 ceph-mon[75334]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Feb 02 15:31:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094163797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094163797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2094163797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2094163797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Feb 02 15:31:07 compute-0 ceph-mon[75334]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Feb 02 15:31:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/180319394' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/180319394' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/180319394' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/180319394' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.9 KiB/s wr, 54 op/s
Feb 02 15:31:09 compute-0 ceph-mon[75334]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.9 KiB/s wr, 54 op/s
Feb 02 15:31:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Feb 02 15:31:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb 02 15:31:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb 02 15:31:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb 02 15:31:12 compute-0 ceph-mon[75334]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Feb 02 15:31:12 compute-0 ceph-mon[75334]: osdmap e154: 3 total, 3 up, 3 in
Feb 02 15:31:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 02 15:31:14 compute-0 ceph-mon[75334]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Feb 02 15:31:16 compute-0 ceph-mon[75334]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Feb 02 15:31:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Feb 02 15:31:18 compute-0 ceph-mon[75334]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Feb 02 15:31:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 921 B/s wr, 9 op/s
Feb 02 15:31:20 compute-0 podman[244887]: 2026-02-02 15:31:20.299486551 +0000 UTC m=+0.039591892 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:20 compute-0 podman[244886]: 2026-02-02 15:31:20.314338796 +0000 UTC m=+0.057903739 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:20 compute-0 ceph-mon[75334]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 921 B/s wr, 9 op/s
Feb 02 15:31:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:31:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:22 compute-0 ceph-mon[75334]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:31:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:31:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb 02 15:31:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb 02 15:31:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb 02 15:31:24 compute-0 ceph-mon[75334]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:31:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2578104583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2578104583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 614 B/s wr, 8 op/s
Feb 02 15:31:25 compute-0 ceph-mon[75334]: osdmap e155: 3 total, 3 up, 3 in
Feb 02 15:31:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2578104583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2578104583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:26 compute-0 ceph-mon[75334]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 614 B/s wr, 8 op/s
Feb 02 15:31:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Feb 02 15:31:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb 02 15:31:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb 02 15:31:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb 02 15:31:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085683211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085683211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:28 compute-0 ceph-mon[75334]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Feb 02 15:31:28 compute-0 ceph-mon[75334]: osdmap e156: 3 total, 3 up, 3 in
Feb 02 15:31:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1085683211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1085683211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4286477447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4286477447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Feb 02 15:31:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4286477447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4286477447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.309 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.309 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.330 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.435 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.436 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.445 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.446 239549 INFO nova.compute.claims [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:31:30 compute-0 nova_compute[239545]: 2026-02-02 15:31:30.579 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:30 compute-0 ceph-mon[75334]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Feb 02 15:31:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:31:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867875931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.106 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.112 239549 DEBUG nova.compute.provider_tree [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.127 239549 DEBUG nova.scheduler.client.report [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.151 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.152 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.204 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.204 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.228 239549 INFO nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.250 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.353 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.354 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.355 239549 INFO nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Creating image(s)
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.375 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 66 op/s
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.396 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.420 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.423 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.424 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3867875931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:31 compute-0 ceph-mon[75334]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 66 op/s
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.945 239549 WARNING oslo_policy.policy [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.946 239549 WARNING oslo_policy.policy [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 02 15:31:31 compute-0 nova_compute[239545]: 2026-02-02 15:31:31.950 239549 DEBUG nova.policy [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b1a2ce320b54cc0982384da6edd201c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ce0bcfcc8db482faceb0e2393ff6f5a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb 02 15:31:32 compute-0 nova_compute[239545]: 2026-02-02 15:31:32.135 239549 DEBUG nova.virt.libvirt.imagebackend [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Image locations are: [{'url': 'rbd://e43470b2-6632-573a-87d3-0f5428ec59e9/images/271bf15b-9e9a-428a-a098-dcc68b158a7a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e43470b2-6632-573a-87d3-0f5428ec59e9/images/271bf15b-9e9a-428a-a098-dcc68b158a7a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 02 15:31:32 compute-0 sudo[245006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:31:32 compute-0 sudo[245006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:32 compute-0 sudo[245006]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:32 compute-0 sudo[245031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:31:32 compute-0 sudo[245031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:32 compute-0 sudo[245031]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:31:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:31:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:31:32 compute-0 sudo[245086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:31:32 compute-0 sudo[245086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:32 compute-0 sudo[245086]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:33 compute-0 sudo[245111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:31:33 compute-0 sudo[245111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:33 compute-0 ceph-mon[75334]: osdmap e157: 3 total, 3 up, 3 in
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:31:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.291 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.272488962 +0000 UTC m=+0.031401886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 KiB/s wr, 64 op/s
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.399 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.part --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.401 239549 DEBUG nova.virt.images [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] 271bf15b-9e9a-428a-a098-dcc68b158a7a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.452286608 +0000 UTC m=+0.211199502 container create 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.452 239549 DEBUG nova.privsep.utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.453 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.part /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:33 compute-0 systemd[1]: Started libpod-conmon-000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb.scope.
Feb 02 15:31:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.590997352 +0000 UTC m=+0.349910256 container init 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.596007406 +0000 UTC m=+0.354920290 container start 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:31:33 compute-0 systemd[1]: libpod-000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb.scope: Deactivated successfully.
Feb 02 15:31:33 compute-0 trusting_bardeen[245174]: 167 167
Feb 02 15:31:33 compute-0 conmon[245174]: conmon 000c26f032678dff27f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb.scope/container/memory.events
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.610163992 +0000 UTC m=+0.369076896 container attach 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.61085013 +0000 UTC m=+0.369763014 container died 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 02 15:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba1e1cd9771fae17c0b1caf73abdd70f0cb4d6bb9b6369e66c61ff010d891df9-merged.mount: Deactivated successfully.
Feb 02 15:31:33 compute-0 podman[245148]: 2026-02-02 15:31:33.720109632 +0000 UTC m=+0.479022516 container remove 000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bardeen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.722 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.part /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.converted" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:33 compute-0 systemd[1]: libpod-conmon-000c26f032678dff27f69697ce275e9a404ce0cddd04622486974a4a54b2b2fb.scope: Deactivated successfully.
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.726 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.769 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb.converted --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.771 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.793 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.797 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:33 compute-0 podman[245218]: 2026-02-02 15:31:33.835442007 +0000 UTC m=+0.040279222 container create dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:33 compute-0 systemd[1]: Started libpod-conmon-dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d.scope.
Feb 02 15:31:33 compute-0 nova_compute[239545]: 2026-02-02 15:31:33.872 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Successfully created port: ecb9e392-aa7b-4bef-9702-cdda122dd59a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:31:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:33 compute-0 podman[245218]: 2026-02-02 15:31:33.815515846 +0000 UTC m=+0.020353061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:33 compute-0 podman[245218]: 2026-02-02 15:31:33.914994689 +0000 UTC m=+0.119831894 container init dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:31:33 compute-0 podman[245218]: 2026-02-02 15:31:33.919874249 +0000 UTC m=+0.124711444 container start dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:31:33 compute-0 podman[245218]: 2026-02-02 15:31:33.923727812 +0000 UTC m=+0.128565037 container attach dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:31:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb 02 15:31:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb 02 15:31:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb 02 15:31:34 compute-0 ceph-mon[75334]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 KiB/s wr, 64 op/s
Feb 02 15:31:34 compute-0 zen_lumiere[245254]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:31:34 compute-0 zen_lumiere[245254]: --> All data devices are unavailable
Feb 02 15:31:34 compute-0 systemd[1]: libpod-dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d.scope: Deactivated successfully.
Feb 02 15:31:34 compute-0 podman[245218]: 2026-02-02 15:31:34.343082892 +0000 UTC m=+0.547920077 container died dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-92798ba0d0e4dab11d854c91190355d44fea3dba4edd6acd5044202ac7566e28-merged.mount: Deactivated successfully.
Feb 02 15:31:34 compute-0 podman[245218]: 2026-02-02 15:31:34.377468555 +0000 UTC m=+0.582305730 container remove dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:31:34 compute-0 systemd[1]: libpod-conmon-dd8f1c8744a2ffe46173bee544897125a79663ee27af11d9cc2526346756756d.scope: Deactivated successfully.
Feb 02 15:31:34 compute-0 sudo[245111]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:34 compute-0 sudo[245284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:31:34 compute-0 sudo[245284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:34 compute-0 sudo[245284]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:34 compute-0 sudo[245309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:31:34 compute-0 sudo[245309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.776343732 +0000 UTC m=+0.038192407 container create f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:31:34 compute-0 systemd[1]: Started libpod-conmon-f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e.scope.
Feb 02 15:31:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.837345002 +0000 UTC m=+0.099193707 container init f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.842113189 +0000 UTC m=+0.103961874 container start f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:31:34 compute-0 modest_bassi[245361]: 167 167
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.844992305 +0000 UTC m=+0.106841000 container attach f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:31:34 compute-0 systemd[1]: libpod-f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e.scope: Deactivated successfully.
Feb 02 15:31:34 compute-0 conmon[245361]: conmon f806deafd9c9cdb6f108 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e.scope/container/memory.events
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.846255609 +0000 UTC m=+0.108104294 container died f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.759538355 +0000 UTC m=+0.021387070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1668fa7bdf361d0c4b4845e98f265b93866fe9087227f5b6d6ea8c551bce371-merged.mount: Deactivated successfully.
Feb 02 15:31:34 compute-0 podman[245345]: 2026-02-02 15:31:34.878515686 +0000 UTC m=+0.140364371 container remove f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:31:34 compute-0 systemd[1]: libpod-conmon-f806deafd9c9cdb6f108175fbad0ea03a20db9ae268e91c4417e9bf390f5472e.scope: Deactivated successfully.
Feb 02 15:31:34 compute-0 nova_compute[239545]: 2026-02-02 15:31:34.914 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Successfully updated port: ecb9e392-aa7b-4bef-9702-cdda122dd59a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:31:34 compute-0 nova_compute[239545]: 2026-02-02 15:31:34.933 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:31:34 compute-0 nova_compute[239545]: 2026-02-02 15:31:34.934 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquired lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:31:34 compute-0 nova_compute[239545]: 2026-02-02 15:31:34.934 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:31:34 compute-0 podman[245386]: 2026-02-02 15:31:34.983480654 +0000 UTC m=+0.033474340 container create a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:31:35 compute-0 systemd[1]: Started libpod-conmon-a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119.scope.
Feb 02 15:31:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf93460985f85b83737a51c772297b4f74296be39a8b70e9730daf4b1009073/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf93460985f85b83737a51c772297b4f74296be39a8b70e9730daf4b1009073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf93460985f85b83737a51c772297b4f74296be39a8b70e9730daf4b1009073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf93460985f85b83737a51c772297b4f74296be39a8b70e9730daf4b1009073/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:35.052841967 +0000 UTC m=+0.102835673 container init a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:35.058851436 +0000 UTC m=+0.108845122 container start a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:35.06198702 +0000 UTC m=+0.111980706 container attach a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:34.969056511 +0000 UTC m=+0.019050227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb 02 15:31:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb 02 15:31:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb 02 15:31:35 compute-0 ceph-mon[75334]: osdmap e158: 3 total, 3 up, 3 in
Feb 02 15:31:35 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]: {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     "0": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "devices": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "/dev/loop3"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             ],
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_name": "ceph_lv0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_size": "21470642176",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "name": "ceph_lv0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "tags": {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_name": "ceph",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.crush_device_class": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.encrypted": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.objectstore": "bluestore",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_id": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.vdo": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.with_tpm": "0"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             },
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "vg_name": "ceph_vg0"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         }
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     ],
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     "1": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "devices": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "/dev/loop4"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             ],
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_name": "ceph_lv1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_size": "21470642176",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "name": "ceph_lv1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "tags": {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_name": "ceph",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.crush_device_class": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.encrypted": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.objectstore": "bluestore",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_id": "1",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.vdo": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.with_tpm": "0"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             },
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "vg_name": "ceph_vg1"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         }
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     ],
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     "2": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "devices": [
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "/dev/loop5"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             ],
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_name": "ceph_lv2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_size": "21470642176",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "name": "ceph_lv2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "tags": {
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.cluster_name": "ceph",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.crush_device_class": "",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.encrypted": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.objectstore": "bluestore",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osd_id": "2",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.vdo": "0",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:                 "ceph.with_tpm": "0"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             },
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "type": "block",
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:             "vg_name": "ceph_vg2"
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:         }
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]:     ]
Feb 02 15:31:35 compute-0 nifty_lichterman[245403]: }
Feb 02 15:31:35 compute-0 systemd[1]: libpod-a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119.scope: Deactivated successfully.
Feb 02 15:31:35 compute-0 conmon[245403]: conmon a584a8c5332874b5fee8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119.scope/container/memory.events
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:35.337473788 +0000 UTC m=+0.387467494 container died a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.341 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf93460985f85b83737a51c772297b4f74296be39a8b70e9730daf4b1009073-merged.mount: Deactivated successfully.
Feb 02 15:31:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 KiB/s wr, 56 op/s
Feb 02 15:31:35 compute-0 podman[245386]: 2026-02-02 15:31:35.383779068 +0000 UTC m=+0.433772754 container remove a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:31:35 compute-0 systemd[1]: libpod-conmon-a584a8c5332874b5fee86ba3c98ce80add5301cd45ea20b62e60757c99a69119.scope: Deactivated successfully.
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.401 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] resizing rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:31:35 compute-0 sudo[245309]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:35 compute-0 sudo[245483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:31:35 compute-0 sudo[245483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.474 239549 DEBUG nova.objects.instance [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lazy-loading 'migration_context' on Instance uuid 4cfba600-0819-408d-b5bb-f2ecefc96cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:31:35 compute-0 sudo[245483]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.476 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.498 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.499 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Ensure instance console log exists: /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.499 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.499 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.500 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:35 compute-0 sudo[245526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:31:35 compute-0 sudo[245526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.542 239549 DEBUG nova.compute.manager [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-changed-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.543 239549 DEBUG nova.compute.manager [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Refreshing instance network info cache due to event network-changed-ecb9e392-aa7b-4bef-9702-cdda122dd59a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:31:35 compute-0 nova_compute[239545]: 2026-02-02 15:31:35.543 239549 DEBUG oslo_concurrency.lockutils [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.773598044 +0000 UTC m=+0.039328016 container create 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:31:35 compute-0 systemd[1]: Started libpod-conmon-2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e.scope.
Feb 02 15:31:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.849446909 +0000 UTC m=+0.115176871 container init 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.758429861 +0000 UTC m=+0.024159833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.859220078 +0000 UTC m=+0.124950020 container start 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.862202657 +0000 UTC m=+0.127932599 container attach 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:31:35 compute-0 busy_keller[245580]: 167 167
Feb 02 15:31:35 compute-0 systemd[1]: libpod-2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e.scope: Deactivated successfully.
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.865176826 +0000 UTC m=+0.130906808 container died 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f418d0eee805ebb3aa25d396a7ea05c1d86caf2271bcdfcaafe46b4cf9624150-merged.mount: Deactivated successfully.
Feb 02 15:31:35 compute-0 podman[245563]: 2026-02-02 15:31:35.89579144 +0000 UTC m=+0.161521382 container remove 2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:31:35 compute-0 systemd[1]: libpod-conmon-2c6edbafd5429e3dbf51ca1846ee82cbabcaaf768a73c0c25cfb0fbf1275b46e.scope: Deactivated successfully.
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.005655729 +0000 UTC m=+0.031271783 container create 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:31:36 compute-0 systemd[1]: Started libpod-conmon-3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce.scope.
Feb 02 15:31:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde841285da625801956e140f20e98d24fdd906b68b6e19919293005a7fe9072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde841285da625801956e140f20e98d24fdd906b68b6e19919293005a7fe9072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde841285da625801956e140f20e98d24fdd906b68b6e19919293005a7fe9072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde841285da625801956e140f20e98d24fdd906b68b6e19919293005a7fe9072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.068166649 +0000 UTC m=+0.093782723 container init 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.074546949 +0000 UTC m=+0.100163003 container start 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.077879697 +0000 UTC m=+0.103495781 container attach 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:35.989850918 +0000 UTC m=+0.015467002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:31:36 compute-0 ceph-mon[75334]: osdmap e159: 3 total, 3 up, 3 in
Feb 02 15:31:36 compute-0 ceph-mon[75334]: pgmap v896: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 KiB/s wr, 56 op/s
Feb 02 15:31:36 compute-0 lvm[245696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:31:36 compute-0 lvm[245696]: VG ceph_vg0 finished
Feb 02 15:31:36 compute-0 lvm[245698]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:31:36 compute-0 lvm[245698]: VG ceph_vg1 finished
Feb 02 15:31:36 compute-0 lvm[245700]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:31:36 compute-0 lvm[245700]: VG ceph_vg2 finished
Feb 02 15:31:36 compute-0 gallant_engelbart[245619]: {}
Feb 02 15:31:36 compute-0 systemd[1]: libpod-3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce.scope: Deactivated successfully.
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.754189733 +0000 UTC m=+0.779805807 container died 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 15:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde841285da625801956e140f20e98d24fdd906b68b6e19919293005a7fe9072-merged.mount: Deactivated successfully.
Feb 02 15:31:36 compute-0 podman[245602]: 2026-02-02 15:31:36.785583088 +0000 UTC m=+0.811199142 container remove 3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:31:36 compute-0 systemd[1]: libpod-conmon-3ccbe028cb8531c463a8109ac8b6d27e9f7acd3886a53988cbe9ca25be47f6ce.scope: Deactivated successfully.
Feb 02 15:31:36 compute-0 sudo[245526]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:31:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:31:36 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:36 compute-0 sudo[245715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:31:36 compute-0 sudo[245715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:31:36 compute-0 sudo[245715]: pam_unix(sudo:session): session closed for user root
Feb 02 15:31:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 47 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 696 KiB/s wr, 38 op/s
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.700 239549 DEBUG nova.network.neutron [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Updating instance_info_cache with network_info: [{"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.717 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Releasing lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.718 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Instance network_info: |[{"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.718 239549 DEBUG oslo_concurrency.lockutils [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.718 239549 DEBUG nova.network.neutron [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Refreshing network info cache for port ecb9e392-aa7b-4bef-9702-cdda122dd59a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.722 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Start _get_guest_xml network_info=[{"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.726 239549 WARNING nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.732 239549 DEBUG nova.virt.libvirt.host [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.732 239549 DEBUG nova.virt.libvirt.host [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.739 239549 DEBUG nova.virt.libvirt.host [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.740 239549 DEBUG nova.virt.libvirt.host [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.740 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.740 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.741 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.741 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.741 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.741 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.742 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.742 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.742 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.742 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.743 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.743 239549 DEBUG nova.virt.hardware [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.747 239549 DEBUG nova.privsep.utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 02 15:31:37 compute-0 nova_compute[239545]: 2026-02-02 15:31:37.747 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:31:37 compute-0 ceph-mon[75334]: pgmap v897: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 47 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 696 KiB/s wr, 38 op/s
Feb 02 15:31:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:31:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602175508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.302 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.326 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.331 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:31:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176640249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.842 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.844 239549 DEBUG nova.virt.libvirt.vif [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:31:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-797615945',display_name='tempest-VolumesActionsTest-instance-797615945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-797615945',id=1,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ce0bcfcc8db482faceb0e2393ff6f5a',ramdisk_id='',reservation_id='r-p0fweubv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1694672341',owner_user_name='tempest-VolumesActionsTest-1694672341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:31:31Z,user_data=None,user_id='9b1a2ce320b54cc0982384da6edd201c',uuid=4cfba600-0819-408d-b5bb-f2ecefc96cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.845 239549 DEBUG nova.network.os_vif_util [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converting VIF {"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.846 239549 DEBUG nova.network.os_vif_util [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:31:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1602175508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:31:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/176640249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.850 239549 DEBUG nova.objects.instance [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cfba600-0819-408d-b5bb-f2ecefc96cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.870 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <uuid>4cfba600-0819-408d-b5bb-f2ecefc96cd1</uuid>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <name>instance-00000001</name>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesActionsTest-instance-797615945</nova:name>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:31:37</nova:creationTime>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:user uuid="9b1a2ce320b54cc0982384da6edd201c">tempest-VolumesActionsTest-1694672341-project-member</nova:user>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:project uuid="1ce0bcfcc8db482faceb0e2393ff6f5a">tempest-VolumesActionsTest-1694672341</nova:project>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <nova:port uuid="ecb9e392-aa7b-4bef-9702-cdda122dd59a">
Feb 02 15:31:38 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <system>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="serial">4cfba600-0819-408d-b5bb-f2ecefc96cd1</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="uuid">4cfba600-0819-408d-b5bb-f2ecefc96cd1</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </system>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <os>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </os>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <features>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </features>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk">
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </source>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config">
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </source>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:31:38 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:5a:24:ff"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <target dev="tapecb9e392-aa"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/console.log" append="off"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <video>
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </video>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:31:38 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:31:38 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:31:38 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:31:38 compute-0 nova_compute[239545]: </domain>
Feb 02 15:31:38 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.871 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Preparing to wait for external event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.872 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.872 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.872 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.874 239549 DEBUG nova.virt.libvirt.vif [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:31:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-797615945',display_name='tempest-VolumesActionsTest-instance-797615945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-797615945',id=1,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ce0bcfcc8db482faceb0e2393ff6f5a',ramdisk_id='',reservation_id='r-p0fweubv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1694672341',owner_user_name='tempest-VolumesActionsTest-1694672341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:31:31Z,user_data=None,user_id='9b1a2ce320b54cc0982384da6edd201c',uuid=4cfba600-0819-408d-b5bb-f2ecefc96cd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.874 239549 DEBUG nova.network.os_vif_util [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converting VIF {"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.875 239549 DEBUG nova.network.os_vif_util [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.875 239549 DEBUG os_vif [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.911 239549 DEBUG ovsdbapp.backend.ovs_idl [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.911 239549 DEBUG ovsdbapp.backend.ovs_idl [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.912 239549 DEBUG ovsdbapp.backend.ovs_idl [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.912 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.913 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.913 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.914 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.915 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.917 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.931 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.932 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.932 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:31:38 compute-0 nova_compute[239545]: 2026-02-02 15:31:38.934 239549 INFO oslo.privsep.daemon [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpobw5i3q1/privsep.sock']
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.218 239549 DEBUG nova.network.neutron [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Updated VIF entry in instance network info cache for port ecb9e392-aa7b-4bef-9702-cdda122dd59a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.219 239549 DEBUG nova.network.neutron [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Updating instance_info_cache with network_info: [{"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.259 239549 DEBUG oslo_concurrency.lockutils [req-ccc1ef91-62ba-4207-836a-806e8a9a5c09 req-c2ce9b2d-2cb2-4918-b3ed-516163cd443a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-4cfba600-0819-408d-b5bb-f2ecefc96cd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:31:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 51 MiB data, 183 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 34 op/s
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.563 239549 INFO oslo.privsep.daemon [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Spawned new privsep daemon via rootwrap
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.436 245806 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.440 245806 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.442 245806 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.442 245806 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245806
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.847 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.848 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecb9e392-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.849 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapecb9e392-aa, col_values=(('external_ids', {'iface-id': 'ecb9e392-aa7b-4bef-9702-cdda122dd59a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:24:ff', 'vm-uuid': '4cfba600-0819-408d-b5bb-f2ecefc96cd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:39 compute-0 NetworkManager[49171]: <info>  [1770046299.8815] manager: (tapecb9e392-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.881 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.884 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.886 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.887 239549 INFO os_vif [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa')
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.888205) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299888233, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 259, "total_data_size": 3483180, "memory_usage": 3532928, "flush_reason": "Manual Compaction"}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb 02 15:31:39 compute-0 ceph-mon[75334]: pgmap v898: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 51 MiB data, 183 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 34 op/s
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299899660, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3408779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16341, "largest_seqno": 18644, "table_properties": {"data_size": 3398025, "index_size": 6992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22185, "raw_average_key_size": 20, "raw_value_size": 3376432, "raw_average_value_size": 3170, "num_data_blocks": 308, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046123, "oldest_key_time": 1770046123, "file_creation_time": 1770046299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 11492 microseconds, and 4377 cpu microseconds.
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.899693) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3408779 bytes OK
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.899736) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.900856) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.900867) EVENT_LOG_v1 {"time_micros": 1770046299900864, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.900884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3473333, prev total WAL file size 3473333, number of live WAL files 2.
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.901370) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3328KB)], [38(7648KB)]
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299901464, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11240535, "oldest_snapshot_seqno": -1}
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.936 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.937 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.937 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] No VIF found with MAC fa:16:3e:5a:24:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.937 239549 INFO nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Using config drive
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4564 keys, 9469515 bytes, temperature: kUnknown
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299949510, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9469515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9434873, "index_size": 22116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 110622, "raw_average_key_size": 24, "raw_value_size": 9348390, "raw_average_value_size": 2048, "num_data_blocks": 933, "num_entries": 4564, "num_filter_entries": 4564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.949683) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9469515 bytes
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.8 rd, 196.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5090, records dropped: 526 output_compression: NoCompression
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951031) EVENT_LOG_v1 {"time_micros": 1770046299951023, "job": 18, "event": "compaction_finished", "compaction_time_micros": 48084, "compaction_time_cpu_micros": 23269, "output_level": 6, "num_output_files": 1, "total_output_size": 9469515, "num_input_records": 5090, "num_output_records": 4564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299951341, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046299951902, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.901260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:31:39.951942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:31:39 compute-0 nova_compute[239545]: 2026-02-02 15:31:39.953 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63170207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63170207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/63170207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/63170207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:40 compute-0 nova_compute[239545]: 2026-02-02 15:31:40.980 239549 INFO nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Creating config drive at /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config
Feb 02 15:31:40 compute-0 nova_compute[239545]: 2026-02-02 15:31:40.987 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpff_jyfaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.125 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpff_jyfaj" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.148 239549 DEBUG nova.storage.rbd_utils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] rbd image 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.151 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.244 239549 DEBUG oslo_concurrency.processutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config 4cfba600-0819-408d-b5bb-f2ecefc96cd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.245 239549 INFO nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Deleting local config drive /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1/disk.config because it was imported into RBD.
Feb 02 15:31:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 02 15:31:41 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 02 15:31:41 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 02 15:31:41 compute-0 kernel: tapecb9e392-aa: entered promiscuous mode
Feb 02 15:31:41 compute-0 NetworkManager[49171]: <info>  [1770046301.3252] manager: (tapecb9e392-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb 02 15:31:41 compute-0 ovn_controller[144995]: 2026-02-02T15:31:41Z|00027|binding|INFO|Claiming lport ecb9e392-aa7b-4bef-9702-cdda122dd59a for this chassis.
Feb 02 15:31:41 compute-0 ovn_controller[144995]: 2026-02-02T15:31:41Z|00028|binding|INFO|ecb9e392-aa7b-4bef-9702-cdda122dd59a: Claiming fa:16:3e:5a:24:ff 10.100.0.4
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.327 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.330 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.338 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:24:ff 10.100.0.4'], port_security=['fa:16:3e:5a:24:ff 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4cfba600-0819-408d-b5bb-f2ecefc96cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ce0bcfcc8db482faceb0e2393ff6f5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76bcb60a-2934-481f-bd19-7b90959e1935', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58a46ee6-13ea-4eba-875b-a3bb97e5ec29, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ecb9e392-aa7b-4bef-9702-cdda122dd59a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.339 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ecb9e392-aa7b-4bef-9702-cdda122dd59a in datapath f55b5918-7fa4-49c2-a6a6-e765ae3ee25e bound to our chassis
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.341 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f55b5918-7fa4-49c2-a6a6-e765ae3ee25e
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.341 154982 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp8_lsbxge/privsep.sock']
Feb 02 15:31:41 compute-0 systemd-udevd[245905]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:31:41 compute-0 NetworkManager[49171]: <info>  [1770046301.3663] device (tapecb9e392-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:31:41 compute-0 NetworkManager[49171]: <info>  [1770046301.3669] device (tapecb9e392-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:31:41 compute-0 systemd-machined[207609]: New machine qemu-1-instance-00000001.
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.383 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 63 op/s
Feb 02 15:31:41 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 02 15:31:41 compute-0 ovn_controller[144995]: 2026-02-02T15:31:41Z|00029|binding|INFO|Setting lport ecb9e392-aa7b-4bef-9702-cdda122dd59a ovn-installed in OVS
Feb 02 15:31:41 compute-0 ovn_controller[144995]: 2026-02-02T15:31:41Z|00030|binding|INFO|Setting lport ecb9e392-aa7b-4bef-9702-cdda122dd59a up in Southbound
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.393 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.694 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046301.6941683, 4cfba600-0819-408d-b5bb-f2ecefc96cd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.695 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] VM Started (Lifecycle Event)
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.729 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.732 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046301.6942961, 4cfba600-0819-408d-b5bb-f2ecefc96cd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.732 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] VM Paused (Lifecycle Event)
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.764 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.768 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:31:41 compute-0 nova_compute[239545]: 2026-02-02 15:31:41.785 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:31:41 compute-0 ceph-mon[75334]: pgmap v899: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 63 op/s
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.948 154982 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.949 154982 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8_lsbxge/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.844 245965 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.849 245965 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.853 245965 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.853 245965 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245965
Feb 02 15:31:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:41.951 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcaf7fa-c437-4f5c-bf07-7dd44e312af2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb 02 15:31:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb 02 15:31:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.591 239549 DEBUG nova.compute.manager [req-4af24270-3629-44c5-8042-f8046ceedf3f req-a5ed03a4-a3d9-49db-a274-9ff5205f10dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.591 239549 DEBUG oslo_concurrency.lockutils [req-4af24270-3629-44c5-8042-f8046ceedf3f req-a5ed03a4-a3d9-49db-a274-9ff5205f10dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.592 239549 DEBUG oslo_concurrency.lockutils [req-4af24270-3629-44c5-8042-f8046ceedf3f req-a5ed03a4-a3d9-49db-a274-9ff5205f10dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.592 239549 DEBUG oslo_concurrency.lockutils [req-4af24270-3629-44c5-8042-f8046ceedf3f req-a5ed03a4-a3d9-49db-a274-9ff5205f10dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.592 239549 DEBUG nova.compute.manager [req-4af24270-3629-44c5-8042-f8046ceedf3f req-a5ed03a4-a3d9-49db-a274-9ff5205f10dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Processing event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.593 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.604 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.605 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046302.6052096, 4cfba600-0819-408d-b5bb-f2ecefc96cd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.605 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] VM Resumed (Lifecycle Event)
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.609 239549 INFO nova.virt.libvirt.driver [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Instance spawned successfully.
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.609 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.640 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.641 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.641 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.642 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.643 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.643 239549 DEBUG nova.virt.libvirt.driver [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.694 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.698 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:31:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:42.717 245965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:42.718 245965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:42.718 245965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.721 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.730 239549 INFO nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Took 11.38 seconds to spawn the instance on the hypervisor.
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.731 239549 DEBUG nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.792 239549 INFO nova.compute.manager [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Took 12.40 seconds to build instance.
Feb 02 15:31:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:31:42
Feb 02 15:31:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:31:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:31:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'default.rgw.log', 'backups', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 02 15:31:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.812 239549 DEBUG oslo_concurrency.lockutils [None req-4fba1aef-7e11-40d7-acbe-9b442f3656ec 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:42 compute-0 nova_compute[239545]: 2026-02-02 15:31:42.993 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:43 compute-0 ceph-mon[75334]: osdmap e160: 3 total, 3 up, 3 in
Feb 02 15:31:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.6 MiB/s wr, 62 op/s
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.600 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[63744282-1b2f-41eb-a113-d24382fd6898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.601 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf55b5918-71 in ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.604 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf55b5918-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.604 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ad99b1-2a74-4b6f-8312-5434a2d2b31b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.608 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[552a5046-ddd0-4936-8a5b-79333657c759]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.626 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7d12b9-90ed-47c4-9a37-80a5ac00d48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.641 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d74716a-51c8-4026-bcd7-9e1afc4858df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:43.644 154982 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmph7zjumxk/privsep.sock']
Feb 02 15:31:44 compute-0 ceph-mon[75334]: pgmap v901: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.6 MiB/s wr, 62 op/s
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.339 154982 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.341 154982 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph7zjumxk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.234 245979 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.238 245979 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.240 245979 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.240 245979 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245979
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.344 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f6c021-282f-4265-af15-7db8a2bf6a4a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3786106744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3786106744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.810 239549 DEBUG nova.compute.manager [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.810 245979 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.811 239549 DEBUG oslo_concurrency.lockutils [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.810 245979 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.811 239549 DEBUG oslo_concurrency.lockutils [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.811 239549 DEBUG oslo_concurrency.lockutils [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:44.810 245979 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.812 239549 DEBUG nova.compute.manager [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] No waiting events found dispatching network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.812 239549 WARNING nova.compute.manager [req-2bca08cf-4d27-4c17-91d5-826a7675c986 req-131b12d9-da43-4e57-b070-9b8f7309fe35 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received unexpected event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a for instance with vm_state active and task_state None.
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:31:44 compute-0 nova_compute[239545]: 2026-02-02 15:31:44.881 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:31:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3786106744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3786106744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.343 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[4beeaf43-569b-41cf-9c53-5e11ab8b33b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.364 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bff558bf-c688-48c8-b3d1-ee54901bfbdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 NetworkManager[49171]: <info>  [1770046305.3659] manager: (tapf55b5918-70): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Feb 02 15:31:45 compute-0 systemd-udevd[245991]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:31:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.386 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecdd9b2-30d2-41a4-8abb-e7825dc01902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.390 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[db3aba18-b89e-4ade-9a34-374918dc9bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 NetworkManager[49171]: <info>  [1770046305.4060] device (tapf55b5918-70): carrier: link connected
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.408 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cce68048-7c18-4382-9593-7f195b83071f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.420 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9594961a-71c6-4ae8-b5ed-d5293adca1ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf55b5918-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:61:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379132, 'reachable_time': 41363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246009, 'error': None, 'target': 'ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.431 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f52232cc-5791-4b66-9394-4a12f23b08eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:6162'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 379132, 'tstamp': 379132}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246010, 'error': None, 'target': 'ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.439 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[35f5eab6-f47f-4a2d-9cee-e697a9b6a5e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf55b5918-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:61:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379132, 'reachable_time': 41363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246011, 'error': None, 'target': 'ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.455 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[810a4b30-8529-4c38-9dd9-88bd0b924237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.489 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[87ed2212-814b-412c-a841-07c1a9d79701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.491 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf55b5918-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.491 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.492 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf55b5918-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:45 compute-0 kernel: tapf55b5918-70: entered promiscuous mode
Feb 02 15:31:45 compute-0 nova_compute[239545]: 2026-02-02 15:31:45.535 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:45 compute-0 NetworkManager[49171]: <info>  [1770046305.5368] manager: (tapf55b5918-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.538 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf55b5918-70, col_values=(('external_ids', {'iface-id': '5d36ad53-5d41-49fd-bc78-05d3adc753af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:45 compute-0 nova_compute[239545]: 2026-02-02 15:31:45.539 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:45 compute-0 ovn_controller[144995]: 2026-02-02T15:31:45Z|00031|binding|INFO|Releasing lport 5d36ad53-5d41-49fd-bc78-05d3adc753af from this chassis (sb_readonly=0)
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.540 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f55b5918-7fa4-49c2-a6a6-e765ae3ee25e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f55b5918-7fa4-49c2-a6a6-e765ae3ee25e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.541 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0685243e-0dbc-435c-ba81-b3b08c8d744b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.542 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/f55b5918-7fa4-49c2-a6a6-e765ae3ee25e.pid.haproxy
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID f55b5918-7fa4-49c2-a6a6-e765ae3ee25e
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:31:45 compute-0 nova_compute[239545]: 2026-02-02 15:31:45.544 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:45.544 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'env', 'PROCESS_TAG=haproxy-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f55b5918-7fa4-49c2-a6a6-e765ae3ee25e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:31:45 compute-0 podman[246044]: 2026-02-02 15:31:45.916004903 +0000 UTC m=+0.070229616 container create 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 02 15:31:45 compute-0 systemd[1]: Started libpod-conmon-991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4.scope.
Feb 02 15:31:45 compute-0 podman[246044]: 2026-02-02 15:31:45.869276588 +0000 UTC m=+0.023501321 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:31:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d196d0fc8a79959d06edef964a34f7488ed18921bec586166d00562f8f928a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:31:45 compute-0 podman[246044]: 2026-02-02 15:31:45.995706426 +0000 UTC m=+0.149931139 container init 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:46 compute-0 podman[246044]: 2026-02-02 15:31:46.00322919 +0000 UTC m=+0.157453893 container start 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:31:46 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [NOTICE]   (246063) : New worker (246065) forked
Feb 02 15:31:46 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [NOTICE]   (246063) : Loading success.
Feb 02 15:31:46 compute-0 ceph-mon[75334]: pgmap v902: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 02 15:31:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 127 op/s
Feb 02 15:31:47 compute-0 nova_compute[239545]: 2026-02-02 15:31:47.995 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:48 compute-0 ceph-mon[75334]: pgmap v903: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 127 op/s
Feb 02 15:31:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/684958815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/684958815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Feb 02 15:31:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/684958815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/684958815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.623 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.624 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.624 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.624 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.624 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.625 239549 INFO nova.compute.manager [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Terminating instance
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.626 239549 DEBUG nova.compute.manager [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:31:49 compute-0 kernel: tapecb9e392-aa (unregistering): left promiscuous mode
Feb 02 15:31:49 compute-0 NetworkManager[49171]: <info>  [1770046309.6633] device (tapecb9e392-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:31:49 compute-0 ovn_controller[144995]: 2026-02-02T15:31:49Z|00032|binding|INFO|Releasing lport ecb9e392-aa7b-4bef-9702-cdda122dd59a from this chassis (sb_readonly=0)
Feb 02 15:31:49 compute-0 ovn_controller[144995]: 2026-02-02T15:31:49Z|00033|binding|INFO|Setting lport ecb9e392-aa7b-4bef-9702-cdda122dd59a down in Southbound
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.670 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 ovn_controller[144995]: 2026-02-02T15:31:49Z|00034|binding|INFO|Removing iface tapecb9e392-aa ovn-installed in OVS
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.673 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.681 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:24:ff 10.100.0.4'], port_security=['fa:16:3e:5a:24:ff 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4cfba600-0819-408d-b5bb-f2ecefc96cd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ce0bcfcc8db482faceb0e2393ff6f5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76bcb60a-2934-481f-bd19-7b90959e1935', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58a46ee6-13ea-4eba-875b-a3bb97e5ec29, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ecb9e392-aa7b-4bef-9702-cdda122dd59a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.683 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ecb9e392-aa7b-4bef-9702-cdda122dd59a in datapath f55b5918-7fa4-49c2-a6a6-e765ae3ee25e unbound from our chassis
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.684 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.685 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[731315de-c296-40ec-ae02-2b661f5eb725]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.685 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e namespace which is not needed anymore
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.686 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 02 15:31:49 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 7.409s CPU time.
Feb 02 15:31:49 compute-0 systemd-machined[207609]: Machine qemu-1-instance-00000001 terminated.
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [NOTICE]   (246063) : haproxy version is 2.8.14-c23fe91
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [NOTICE]   (246063) : path to executable is /usr/sbin/haproxy
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [WARNING]  (246063) : Exiting Master process...
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [WARNING]  (246063) : Exiting Master process...
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [ALERT]    (246063) : Current worker (246065) exited with code 143 (Terminated)
Feb 02 15:31:49 compute-0 neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e[246059]: [WARNING]  (246063) : All workers exited. Exiting... (0)
Feb 02 15:31:49 compute-0 systemd[1]: libpod-991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4.scope: Deactivated successfully.
Feb 02 15:31:49 compute-0 podman[246098]: 2026-02-02 15:31:49.795811188 +0000 UTC m=+0.041993366 container died 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4-userdata-shm.mount: Deactivated successfully.
Feb 02 15:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d196d0fc8a79959d06edef964a34f7488ed18921bec586166d00562f8f928a0-merged.mount: Deactivated successfully.
Feb 02 15:31:49 compute-0 podman[246098]: 2026-02-02 15:31:49.824733156 +0000 UTC m=+0.070915314 container cleanup 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:31:49 compute-0 systemd[1]: libpod-conmon-991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4.scope: Deactivated successfully.
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.839 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.844 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.857 239549 INFO nova.virt.libvirt.driver [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Instance destroyed successfully.
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.857 239549 DEBUG nova.objects.instance [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lazy-loading 'resources' on Instance uuid 4cfba600-0819-408d-b5bb-f2ecefc96cd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.872 239549 DEBUG nova.virt.libvirt.vif [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:31:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-797615945',display_name='tempest-VolumesActionsTest-instance-797615945',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-797615945',id=1,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:31:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ce0bcfcc8db482faceb0e2393ff6f5a',ramdisk_id='',reservation_id='r-p0fweubv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1694672341',owner_user_name='tempest-VolumesActionsTest-1694672341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:31:42Z,user_data=None,user_id='9b1a2ce320b54cc0982384da6edd201c',uuid=4cfba600-0819-408d-b5bb-f2ecefc96cd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.872 239549 DEBUG nova.network.os_vif_util [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converting VIF {"id": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "address": "fa:16:3e:5a:24:ff", "network": {"id": "f55b5918-7fa4-49c2-a6a6-e765ae3ee25e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1300748572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ce0bcfcc8db482faceb0e2393ff6f5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecb9e392-aa", "ovs_interfaceid": "ecb9e392-aa7b-4bef-9702-cdda122dd59a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.873 239549 DEBUG nova.network.os_vif_util [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.873 239549 DEBUG os_vif [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.875 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.875 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecb9e392-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.876 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.877 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 podman[246129]: 2026-02-02 15:31:49.879659924 +0000 UTC m=+0.040261599 container remove 991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.879 239549 INFO os_vif [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:24:ff,bridge_name='br-int',has_traffic_filtering=True,id=ecb9e392-aa7b-4bef-9702-cdda122dd59a,network=Network(f55b5918-7fa4-49c2-a6a6-e765ae3ee25e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecb9e392-aa')
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.882 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[68737496-802c-42e0-921e-2874a7c3db8b]: (4, ('Mon Feb  2 03:31:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e (991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4)\n991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4\nMon Feb  2 03:31:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e (991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4)\n991004d4edfd7ca5a9e708dc44874f9e9a03886ad44fb1b0490e12d9bc9d53f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.883 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccd989d-08e6-4f10-8ca2-d7ae2c02b59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.884 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf55b5918-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:49 compute-0 kernel: tapf55b5918-70: left promiscuous mode
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.893 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[67bc863f-06ac-40a7-b045-9c4ddc1b6888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 nova_compute[239545]: 2026-02-02 15:31:49.895 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.908 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7593e40f-117f-4e9b-bb4a-4c1f40a15ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.909 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f852b7b-642b-407f-8371-80911d81c3c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.917 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[54614f95-d6d1-4ac6-90f3-779aa57dc4c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379125, 'reachable_time': 22146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246172, 'error': None, 'target': 'ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:49 compute-0 systemd[1]: run-netns-ovnmeta\x2df55b5918\x2d7fa4\x2d49c2\x2da6a6\x2de765ae3ee25e.mount: Deactivated successfully.
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.925 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f55b5918-7fa4-49c2-a6a6-e765ae3ee25e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:31:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:49.925 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0d4fcb-8d38-45e6-ab92-feae6da65e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.138 239549 DEBUG nova.compute.manager [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-unplugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.138 239549 DEBUG oslo_concurrency.lockutils [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.139 239549 DEBUG oslo_concurrency.lockutils [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.139 239549 DEBUG oslo_concurrency.lockutils [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.139 239549 DEBUG nova.compute.manager [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] No waiting events found dispatching network-vif-unplugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.139 239549 DEBUG nova.compute.manager [req-67d323be-f842-4f6c-b63b-05a4d467a34e req-19c9f235-40c6-4492-9be8-99c81ebeb8f6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-unplugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.151 239549 INFO nova.virt.libvirt.driver [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Deleting instance files /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1_del
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.151 239549 INFO nova.virt.libvirt.driver [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Deletion of /var/lib/nova/instances/4cfba600-0819-408d-b5bb-f2ecefc96cd1_del complete
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.246 239549 DEBUG nova.virt.libvirt.host [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.247 239549 INFO nova.virt.libvirt.host [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] UEFI support detected
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.249 239549 INFO nova.compute.manager [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.249 239549 DEBUG oslo.service.loopingcall [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.249 239549 DEBUG nova.compute.manager [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.249 239549 DEBUG nova.network.neutron [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:31:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:50.262 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:31:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:50.263 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.263 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:50 compute-0 ceph-mon[75334]: pgmap v904: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.931 239549 DEBUG nova.network.neutron [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.953 239549 INFO nova.compute.manager [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Took 0.70 seconds to deallocate network for instance.
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.993 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:50 compute-0 nova_compute[239545]: 2026-02-02 15:31:50.993 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.058 239549 DEBUG oslo_concurrency.processutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.130 239549 DEBUG nova.compute.manager [req-28711cf8-2f5e-4aea-bc7a-397b966c1708 req-9f4463f7-34d4-49a2-977c-6cf85da8562d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-deleted-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:51 compute-0 podman[246195]: 2026-02-02 15:31:51.320800749 +0000 UTC m=+0.067578293 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 02 15:31:51 compute-0 podman[246196]: 2026-02-02 15:31:51.330623927 +0000 UTC m=+0.072643251 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Feb 02 15:31:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 45 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 152 op/s
Feb 02 15:31:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:31:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306726293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.638 239549 DEBUG oslo_concurrency.processutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.643 239549 DEBUG nova.compute.provider_tree [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.693 239549 ERROR nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] [req-c2b15c29-a372-4462-9f5e-5329901af44e] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-c2b15c29-a372-4462-9f5e-5329901af44e"}]}
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.707 239549 DEBUG nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.724 239549 DEBUG nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.724 239549 DEBUG nova.compute.provider_tree [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.740 239549 DEBUG nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.762 239549 DEBUG nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:31:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590229642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590229642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:51 compute-0 nova_compute[239545]: 2026-02-02 15:31:51.795 239549 DEBUG oslo_concurrency.processutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.216 239549 DEBUG nova.compute.manager [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.216 239549 DEBUG oslo_concurrency.lockutils [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.217 239549 DEBUG oslo_concurrency.lockutils [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.217 239549 DEBUG oslo_concurrency.lockutils [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.217 239549 DEBUG nova.compute.manager [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] No waiting events found dispatching network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.217 239549 WARNING nova.compute.manager [req-98ea5f2c-b7e9-409c-ae43-0956ade938af req-1c858204-0c71-45c2-94ca-20172bab0ba2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Received unexpected event network-vif-plugged-ecb9e392-aa7b-4bef-9702-cdda122dd59a for instance with vm_state deleted and task_state None.
Feb 02 15:31:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:31:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587965693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.317 239549 DEBUG oslo_concurrency.processutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.323 239549 DEBUG nova.compute.provider_tree [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.370 239549 DEBUG nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updated inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.371 239549 DEBUG nova.compute.provider_tree [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.371 239549 DEBUG nova.compute.provider_tree [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.403 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.430 239549 INFO nova.scheduler.client.report [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Deleted allocations for instance 4cfba600-0819-408d-b5bb-f2ecefc96cd1
Feb 02 15:31:52 compute-0 ceph-mon[75334]: pgmap v905: 305 pgs: 305 active+clean; 45 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 152 op/s
Feb 02 15:31:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2306726293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/590229642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/590229642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/587965693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.505 239549 DEBUG oslo_concurrency.lockutils [None req-43c1bfb7-ccd5-437c-8365-9a32366ec42a 9b1a2ce320b54cc0982384da6edd201c 1ce0bcfcc8db482faceb0e2393ff6f5a - - default default] Lock "4cfba600-0819-408d-b5bb-f2ecefc96cd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:52 compute-0 nova_compute[239545]: 2026-02-02 15:31:52.997 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:53 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:53.265 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:31:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 137 op/s
Feb 02 15:31:53 compute-0 nova_compute[239545]: 2026-02-02 15:31:53.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:31:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3147489723' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3147489723' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.84289764605005e-08 of space, bias 1.0, pg target 2.952869293815015e-05 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.908031736119166e-06 of space, bias 1.0, pg target 0.0005724095208357497 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659824711399137 of space, bias 1.0, pg target 0.1997947413419741 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.533846374817749e-06 of space, bias 4.0, pg target 0.0018406156497812987 quantized to 16 (current 16)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:31:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:31:54 compute-0 ceph-mon[75334]: pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 137 op/s
Feb 02 15:31:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3147489723' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3147489723' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:54 compute-0 nova_compute[239545]: 2026-02-02 15:31:54.878 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702336581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702336581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 148 op/s
Feb 02 15:31:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970540020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970540020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1702336581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1702336581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1970540020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1970540020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:56 compute-0 ceph-mon[75334]: pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 148 op/s
Feb 02 15:31:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:31:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 698 KiB/s rd, 4.7 KiB/s wr, 113 op/s
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.566 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.566 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:31:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1352098515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1352098515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:57 compute-0 nova_compute[239545]: 2026-02-02 15:31:57.998 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:31:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595260343' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:31:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595260343' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 ceph-mon[75334]: pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 698 KiB/s rd, 4.7 KiB/s wr, 113 op/s
Feb 02 15:31:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1352098515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1352098515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/595260343' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/595260343' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.573 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:31:58 compute-0 nova_compute[239545]: 2026-02-02 15:31:58.573 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:31:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657099451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.115 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:59.243 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:59.244 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:31:59.244 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.255 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.256 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4785MB free_disk=59.98827534541488GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.256 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.257 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.329 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.329 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.346 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:31:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 4.1 KiB/s wr, 102 op/s
Feb 02 15:31:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/657099451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:31:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564894805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.874 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.878 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.880 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.898 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.924 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:31:59 compute-0 nova_compute[239545]: 2026-02-02 15:31:59.925 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:00 compute-0 ceph-mon[75334]: pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 4.1 KiB/s wr, 102 op/s
Feb 02 15:32:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1564894805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:00 compute-0 nova_compute[239545]: 2026-02-02 15:32:00.925 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:00 compute-0 nova_compute[239545]: 2026-02-02 15:32:00.928 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:00 compute-0 nova_compute[239545]: 2026-02-02 15:32:00.929 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:00 compute-0 nova_compute[239545]: 2026-02-02 15:32:00.929 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:00 compute-0 nova_compute[239545]: 2026-02-02 15:32:00.929 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:32:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 5.0 KiB/s wr, 117 op/s
Feb 02 15:32:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:02 compute-0 ceph-mon[75334]: pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 5.0 KiB/s wr, 117 op/s
Feb 02 15:32:02 compute-0 nova_compute[239545]: 2026-02-02 15:32:02.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1704916554' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1704916554' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:03 compute-0 nova_compute[239545]: 2026-02-02 15:32:03.000 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 KiB/s wr, 100 op/s
Feb 02 15:32:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1704916554' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1704916554' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:04 compute-0 ceph-mon[75334]: pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 KiB/s wr, 100 op/s
Feb 02 15:32:04 compute-0 nova_compute[239545]: 2026-02-02 15:32:04.854 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046309.8531158, 4cfba600-0819-408d-b5bb-f2ecefc96cd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:04 compute-0 nova_compute[239545]: 2026-02-02 15:32:04.855 239549 INFO nova.compute.manager [-] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] VM Stopped (Lifecycle Event)
Feb 02 15:32:04 compute-0 nova_compute[239545]: 2026-02-02 15:32:04.883 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:04 compute-0 nova_compute[239545]: 2026-02-02 15:32:04.997 239549 DEBUG nova.compute.manager [None req-fe770a06-0043-4520-a8ac-95bbb5202fbe - - - - - -] [instance: 4cfba600-0819-408d-b5bb-f2ecefc96cd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 KiB/s wr, 98 op/s
Feb 02 15:32:05 compute-0 ceph-mon[75334]: pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 KiB/s wr, 98 op/s
Feb 02 15:32:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1767140315' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1767140315' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1767140315' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1767140315' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.4 KiB/s wr, 80 op/s
Feb 02 15:32:07 compute-0 ceph-mon[75334]: pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.4 KiB/s wr, 80 op/s
Feb 02 15:32:08 compute-0 nova_compute[239545]: 2026-02-02 15:32:08.012 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 KiB/s wr, 71 op/s
Feb 02 15:32:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1207479926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1207479926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:09 compute-0 nova_compute[239545]: 2026-02-02 15:32:09.886 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:10 compute-0 ceph-mon[75334]: pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 KiB/s wr, 71 op/s
Feb 02 15:32:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1207479926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1207479926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 KiB/s wr, 86 op/s
Feb 02 15:32:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:12 compute-0 ceph-mon[75334]: pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 KiB/s wr, 86 op/s
Feb 02 15:32:13 compute-0 nova_compute[239545]: 2026-02-02 15:32:13.013 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 KiB/s wr, 59 op/s
Feb 02 15:32:14 compute-0 nova_compute[239545]: 2026-02-02 15:32:14.132 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:14 compute-0 ceph-mon[75334]: pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 KiB/s wr, 59 op/s
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:14 compute-0 nova_compute[239545]: 2026-02-02 15:32:14.889 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 56 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 768 KiB/s wr, 59 op/s
Feb 02 15:32:15 compute-0 ceph-mon[75334]: pgmap v917: 305 pgs: 305 active+clean; 56 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 768 KiB/s wr, 59 op/s
Feb 02 15:32:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1203594197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1203594197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1203594197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1203594197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 76 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 63 op/s
Feb 02 15:32:17 compute-0 ceph-mon[75334]: pgmap v918: 305 pgs: 305 active+clean; 76 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 63 op/s
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.054 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.082 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.082 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.103 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.179 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.180 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.189 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.190 239549 INFO nova.compute.claims [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.288 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb 02 15:32:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb 02 15:32:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb 02 15:32:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:32:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154052752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.823 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.830 239549 DEBUG nova.compute.provider_tree [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.849 239549 DEBUG nova.scheduler.client.report [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.874 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.875 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.948 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.949 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:32:18 compute-0 nova_compute[239545]: 2026-02-02 15:32:18.971 239549 INFO nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.001 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.165 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.167 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.167 239549 INFO nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Creating image(s)
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.187 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.208 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.229 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.232 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.246 239549 DEBUG nova.policy [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83ee7fa03617458e9265b743f0ff61cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6977b6ce680b402a9c819ab435e57786', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.282 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.283 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.283 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.284 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.303 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.306 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 947d8658-9954-4913-a435-b11628cafdf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 76 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.490 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 947d8658-9954-4913-a435-b11628cafdf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.535 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] resizing rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.596 239549 DEBUG nova.objects.instance [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'migration_context' on Instance uuid 947d8658-9954-4913-a435-b11628cafdf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.617 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.618 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Ensure instance console log exists: /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.618 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.618 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.618 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:19 compute-0 ceph-mon[75334]: osdmap e161: 3 total, 3 up, 3 in
Feb 02 15:32:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4154052752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:19 compute-0 ceph-mon[75334]: pgmap v920: 305 pgs: 305 active+clean; 76 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Feb 02 15:32:19 compute-0 nova_compute[239545]: 2026-02-02 15:32:19.891 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:20 compute-0 nova_compute[239545]: 2026-02-02 15:32:20.327 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Successfully created port: 0df1eb3c-622f-404f-847d-1d6af54dfaac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:32:20 compute-0 nova_compute[239545]: 2026-02-02 15:32:20.970 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Successfully updated port: 0df1eb3c-622f-404f-847d-1d6af54dfaac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:32:20 compute-0 nova_compute[239545]: 2026-02-02 15:32:20.987 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:32:20 compute-0 nova_compute[239545]: 2026-02-02 15:32:20.987 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquired lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:32:20 compute-0 nova_compute[239545]: 2026-02-02 15:32:20.988 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:32:21 compute-0 nova_compute[239545]: 2026-02-02 15:32:21.158 239549 DEBUG nova.compute.manager [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-changed-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:21 compute-0 nova_compute[239545]: 2026-02-02 15:32:21.159 239549 DEBUG nova.compute.manager [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Refreshing instance network info cache due to event network-changed-0df1eb3c-622f-404f-847d-1d6af54dfaac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:32:21 compute-0 nova_compute[239545]: 2026-02-02 15:32:21.159 239549 DEBUG oslo_concurrency.lockutils [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:32:21 compute-0 nova_compute[239545]: 2026-02-02 15:32:21.274 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:32:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 74 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Feb 02 15:32:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:22 compute-0 podman[246497]: 2026-02-02 15:32:22.325669262 +0000 UTC m=+0.060174491 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 02 15:32:22 compute-0 podman[246496]: 2026-02-02 15:32:22.341444982 +0000 UTC m=+0.079313284 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:32:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb 02 15:32:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb 02 15:32:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb 02 15:32:22 compute-0 ceph-mon[75334]: pgmap v921: 305 pgs: 305 active+clean; 74 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.519 239549 DEBUG nova.network.neutron [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Updating instance_info_cache with network_info: [{"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.543 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Releasing lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.544 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Instance network_info: |[{"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.544 239549 DEBUG oslo_concurrency.lockutils [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.544 239549 DEBUG nova.network.neutron [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Refreshing network info cache for port 0df1eb3c-622f-404f-847d-1d6af54dfaac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.549 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Start _get_guest_xml network_info=[{"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.554 239549 WARNING nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.560 239549 DEBUG nova.virt.libvirt.host [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.561 239549 DEBUG nova.virt.libvirt.host [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.568 239549 DEBUG nova.virt.libvirt.host [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.570 239549 DEBUG nova.virt.libvirt.host [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.570 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.570 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.571 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.571 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.572 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.572 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.572 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.573 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.573 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.573 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.574 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.574 239549 DEBUG nova.virt.hardware [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:32:22 compute-0 nova_compute[239545]: 2026-02-02 15:32:22.577 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.056 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341914672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.144 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.175 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.179 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 MiB/s wr, 104 op/s
Feb 02 15:32:23 compute-0 ceph-mon[75334]: osdmap e162: 3 total, 3 up, 3 in
Feb 02 15:32:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/341914672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117391865' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.725 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.728 239549 DEBUG nova.virt.libvirt.vif [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:32:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-897504087',display_name='tempest-VolumesActionsTest-instance-897504087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-897504087',id=2,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-huaj6a65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:32:19Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=947d8658-9954-4913-a435-b11628cafdf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.729 239549 DEBUG nova.network.os_vif_util [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.730 239549 DEBUG nova.network.os_vif_util [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.732 239549 DEBUG nova.objects.instance [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'pci_devices' on Instance uuid 947d8658-9954-4913-a435-b11628cafdf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.748 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <uuid>947d8658-9954-4913-a435-b11628cafdf2</uuid>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <name>instance-00000002</name>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesActionsTest-instance-897504087</nova:name>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:32:22</nova:creationTime>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:user uuid="83ee7fa03617458e9265b743f0ff61cb">tempest-VolumesActionsTest-571181442-project-member</nova:user>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:project uuid="6977b6ce680b402a9c819ab435e57786">tempest-VolumesActionsTest-571181442</nova:project>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <nova:port uuid="0df1eb3c-622f-404f-847d-1d6af54dfaac">
Feb 02 15:32:23 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <system>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="serial">947d8658-9954-4913-a435-b11628cafdf2</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="uuid">947d8658-9954-4913-a435-b11628cafdf2</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </system>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <os>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </os>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <features>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </features>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/947d8658-9954-4913-a435-b11628cafdf2_disk">
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </source>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/947d8658-9954-4913-a435-b11628cafdf2_disk.config">
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </source>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:32:23 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:4c:e9:7d"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <target dev="tap0df1eb3c-62"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/console.log" append="off"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <video>
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </video>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:32:23 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:32:23 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:32:23 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:32:23 compute-0 nova_compute[239545]: </domain>
Feb 02 15:32:23 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.749 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Preparing to wait for external event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.750 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.751 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.751 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.752 239549 DEBUG nova.virt.libvirt.vif [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:32:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-897504087',display_name='tempest-VolumesActionsTest-instance-897504087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-897504087',id=2,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-huaj6a65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:32:19Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=947d8658-9954-4913-a435-b11628cafdf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.752 239549 DEBUG nova.network.os_vif_util [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.753 239549 DEBUG nova.network.os_vif_util [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.753 239549 DEBUG os_vif [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.754 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.754 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.755 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:32:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2407160806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.758 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.759 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0df1eb3c-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.759 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0df1eb3c-62, col_values=(('external_ids', {'iface-id': '0df1eb3c-622f-404f-847d-1d6af54dfaac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:e9:7d', 'vm-uuid': '947d8658-9954-4913-a435-b11628cafdf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2407160806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.797 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:23 compute-0 NetworkManager[49171]: <info>  [1770046343.7994] manager: (tap0df1eb3c-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.802 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.803 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.805 239549 INFO os_vif [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62')
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.853 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.853 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.854 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No VIF found with MAC fa:16:3e:4c:e9:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.854 239549 INFO nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Using config drive
Feb 02 15:32:23 compute-0 nova_compute[239545]: 2026-02-02 15:32:23.872 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.142 239549 INFO nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Creating config drive at /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.146 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7yuaicqq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.265 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7yuaicqq" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.296 239549 DEBUG nova.storage.rbd_utils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 947d8658-9954-4913-a435-b11628cafdf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.300 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config 947d8658-9954-4913-a435-b11628cafdf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.325 239549 DEBUG nova.network.neutron [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Updated VIF entry in instance network info cache for port 0df1eb3c-622f-404f-847d-1d6af54dfaac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.326 239549 DEBUG nova.network.neutron [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Updating instance_info_cache with network_info: [{"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.344 239549 DEBUG oslo_concurrency.lockutils [req-63110d65-6876-42e9-a74c-8d12ead0459e req-0db4dc69-a015-4729-8bdd-9b019407ea7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-947d8658-9954-4913-a435-b11628cafdf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.421 239549 DEBUG oslo_concurrency.processutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config 947d8658-9954-4913-a435-b11628cafdf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.421 239549 INFO nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Deleting local config drive /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2/disk.config because it was imported into RBD.
Feb 02 15:32:24 compute-0 kernel: tap0df1eb3c-62: entered promiscuous mode
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.4640] manager: (tap0df1eb3c-62): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Feb 02 15:32:24 compute-0 ovn_controller[144995]: 2026-02-02T15:32:24Z|00035|binding|INFO|Claiming lport 0df1eb3c-622f-404f-847d-1d6af54dfaac for this chassis.
Feb 02 15:32:24 compute-0 ovn_controller[144995]: 2026-02-02T15:32:24Z|00036|binding|INFO|0df1eb3c-622f-404f-847d-1d6af54dfaac: Claiming fa:16:3e:4c:e9:7d 10.100.0.11
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.464 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.468 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.470 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.481 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:e9:7d 10.100.0.11'], port_security=['fa:16:3e:4c:e9:7d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '947d8658-9954-4913-a435-b11628cafdf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=0df1eb3c-622f-404f-847d-1d6af54dfaac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.482 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 0df1eb3c-622f-404f-847d-1d6af54dfaac in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 bound to our chassis
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.484 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:24 compute-0 systemd-machined[207609]: New machine qemu-2-instance-00000002.
Feb 02 15:32:24 compute-0 systemd-udevd[246675]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:32:24 compute-0 ceph-mon[75334]: pgmap v923: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 MiB/s wr, 104 op/s
Feb 02 15:32:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/117391865' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2407160806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2407160806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.493 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[57af1a84-ba56-496e-9f9a-dcb6db22ead7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.494 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e1b911-f1 in ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.494 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.495 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e1b911-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.496 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[332e1aa4-66ab-4ed0-a1a1-4eec4779bc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_controller[144995]: 2026-02-02T15:32:24Z|00037|binding|INFO|Setting lport 0df1eb3c-622f-404f-847d-1d6af54dfaac ovn-installed in OVS
Feb 02 15:32:24 compute-0 ovn_controller[144995]: 2026-02-02T15:32:24Z|00038|binding|INFO|Setting lport 0df1eb3c-622f-404f-847d-1d6af54dfaac up in Southbound
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.497 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.497 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[25ba2b73-df05-4b8e-87de-80aec01f482e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.5037] device (tap0df1eb3c-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.5043] device (tap0df1eb3c-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.508 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[47403e0c-8d9b-451e-a656-40b8a256131b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.533 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffdda86-399c-45cc-880e-a8ccb78f0eee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.550 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae37b5a-b712-4910-bd94-c9df86957c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.554 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f76a9b-df3a-4a99-86f2-15b25a853fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.5554] manager: (tap67e1b911-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.579 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[973d849a-e5d2-4ef9-a12a-882bf2b559ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.584 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cde4d553-cea8-492a-8ada-5f576a265551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.6016] device (tap67e1b911-f0): carrier: link connected
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.606 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[8f420ff5-f039-491a-94f8-1ff7d0a8dfdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.621 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[98e35a9f-63c0-4145-83c6-bfbd63605cb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383051, 'reachable_time': 18824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246708, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.632 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3a7b23-639d-4bf9-b722-64792ddcafcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:7f71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383051, 'tstamp': 383051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246709, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.644 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[471c099e-f7c1-4fcc-8e68-fa5b3928c65f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383051, 'reachable_time': 18824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246710, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.665 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[17ec699f-7723-4f41-bebb-054d2ebc96ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.707 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d42a2e-6cd7-4939-8bb7-c795a0d52c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.714 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.715 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.715 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e1b911-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.717 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 NetworkManager[49171]: <info>  [1770046344.7191] manager: (tap67e1b911-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb 02 15:32:24 compute-0 kernel: tap67e1b911-f0: entered promiscuous mode
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.720 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.722 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e1b911-f0, col_values=(('external_ids', {'iface-id': '15b5741d-fc0b-4bac-96bb-fb617a54e450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.723 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_controller[144995]: 2026-02-02T15:32:24Z|00039|binding|INFO|Releasing lport 15b5741d-fc0b-4bac-96bb-fb617a54e450 from this chassis (sb_readonly=0)
Feb 02 15:32:24 compute-0 nova_compute[239545]: 2026-02-02 15:32:24.732 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.734 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.735 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebaabb4-b777-48cb-a969-4f7dc4a091bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.735 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:32:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:24.738 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'env', 'PROCESS_TAG=haproxy-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e1b911-f9d9-4f65-ae5c-193b47a00180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.057 239549 DEBUG nova.compute.manager [req-0a1c8350-cb63-4b50-a4df-3ed54760fe12 req-7387e760-2e1d-4832-8bc5-c5c541a64800 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.058 239549 DEBUG oslo_concurrency.lockutils [req-0a1c8350-cb63-4b50-a4df-3ed54760fe12 req-7387e760-2e1d-4832-8bc5-c5c541a64800 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.058 239549 DEBUG oslo_concurrency.lockutils [req-0a1c8350-cb63-4b50-a4df-3ed54760fe12 req-7387e760-2e1d-4832-8bc5-c5c541a64800 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.058 239549 DEBUG oslo_concurrency.lockutils [req-0a1c8350-cb63-4b50-a4df-3ed54760fe12 req-7387e760-2e1d-4832-8bc5-c5c541a64800 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.059 239549 DEBUG nova.compute.manager [req-0a1c8350-cb63-4b50-a4df-3ed54760fe12 req-7387e760-2e1d-4832-8bc5-c5c541a64800 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Processing event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:32:25 compute-0 podman[246742]: 2026-02-02 15:32:25.100084723 +0000 UTC m=+0.043018643 container create 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 02 15:32:25 compute-0 systemd[1]: Started libpod-conmon-8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228.scope.
Feb 02 15:32:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392e1f254c37402767e17a93ffc9cc09d5e4d8ff37a311c9744fc3bab54483f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:25 compute-0 podman[246742]: 2026-02-02 15:32:25.169269829 +0000 UTC m=+0.112203819 container init 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:32:25 compute-0 podman[246742]: 2026-02-02 15:32:25.077628341 +0000 UTC m=+0.020562321 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:32:25 compute-0 podman[246742]: 2026-02-02 15:32:25.174174613 +0000 UTC m=+0.117108543 container start 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:32:25 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [NOTICE]   (246762) : New worker (246764) forked
Feb 02 15:32:25 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [NOTICE]   (246762) : Loading success.
Feb 02 15:32:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.851 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.852 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046345.8525586, 947d8658-9954-4913-a435-b11628cafdf2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.853 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] VM Started (Lifecycle Event)
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.855 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.858 239549 INFO nova.virt.libvirt.driver [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Instance spawned successfully.
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.858 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.882 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.887 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.891 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.891 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.892 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.892 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.893 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.894 239549 DEBUG nova.virt.libvirt.driver [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.921 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.921 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046345.853104, 947d8658-9954-4913-a435-b11628cafdf2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.921 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] VM Paused (Lifecycle Event)
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.951 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.954 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046345.8550265, 947d8658-9954-4913-a435-b11628cafdf2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.954 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] VM Resumed (Lifecycle Event)
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.964 239549 INFO nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Took 6.80 seconds to spawn the instance on the hypervisor.
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.964 239549 DEBUG nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.990 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:25 compute-0 nova_compute[239545]: 2026-02-02 15:32:25.994 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:32:26 compute-0 nova_compute[239545]: 2026-02-02 15:32:26.016 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:32:26 compute-0 nova_compute[239545]: 2026-02-02 15:32:26.029 239549 INFO nova.compute.manager [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Took 7.88 seconds to build instance.
Feb 02 15:32:26 compute-0 nova_compute[239545]: 2026-02-02 15:32:26.046 239549 DEBUG oslo_concurrency.lockutils [None req-ea13baa3-eacb-40b5-ae62-516ac390d4a7 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:26 compute-0 ceph-mon[75334]: pgmap v924: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Feb 02 15:32:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.187 239549 DEBUG nova.compute.manager [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.187 239549 DEBUG oslo_concurrency.lockutils [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.188 239549 DEBUG oslo_concurrency.lockutils [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.188 239549 DEBUG oslo_concurrency.lockutils [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.188 239549 DEBUG nova.compute.manager [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] No waiting events found dispatching network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:27 compute-0 nova_compute[239545]: 2026-02-02 15:32:27.188 239549 WARNING nova.compute.manager [req-3dab9c93-3b60-400a-a079-8d5e5189e75d req-82a0d54e-1098-40ad-97e2-6c9f10fb62ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received unexpected event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac for instance with vm_state active and task_state None.
Feb 02 15:32:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 2.5 MiB/s wr, 107 op/s
Feb 02 15:32:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614307165' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2614307165' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.058 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.091 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.092 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.093 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.093 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.094 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.096 239549 INFO nova.compute.manager [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Terminating instance
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.097 239549 DEBUG nova.compute.manager [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:32:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234864633' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234864633' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:28 compute-0 kernel: tap0df1eb3c-62 (unregistering): left promiscuous mode
Feb 02 15:32:28 compute-0 NetworkManager[49171]: <info>  [1770046348.1387] device (tap0df1eb3c-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:32:28 compute-0 ovn_controller[144995]: 2026-02-02T15:32:28Z|00040|binding|INFO|Releasing lport 0df1eb3c-622f-404f-847d-1d6af54dfaac from this chassis (sb_readonly=0)
Feb 02 15:32:28 compute-0 ovn_controller[144995]: 2026-02-02T15:32:28Z|00041|binding|INFO|Setting lport 0df1eb3c-622f-404f-847d-1d6af54dfaac down in Southbound
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.141 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 ovn_controller[144995]: 2026-02-02T15:32:28Z|00042|binding|INFO|Removing iface tap0df1eb3c-62 ovn-installed in OVS
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.144 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.152 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:e9:7d 10.100.0.11'], port_security=['fa:16:3e:4c:e9:7d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '947d8658-9954-4913-a435-b11628cafdf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=0df1eb3c-622f-404f-847d-1d6af54dfaac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.154 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 0df1eb3c-622f-404f-847d-1d6af54dfaac in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 unbound from our chassis
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.157 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e1b911-f9d9-4f65-ae5c-193b47a00180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.158 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[af0d3567-b69d-473f-a486-7c499d414bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.159 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace which is not needed anymore
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.160 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 02 15:32:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.632s CPU time.
Feb 02 15:32:28 compute-0 systemd-machined[207609]: Machine qemu-2-instance-00000002 terminated.
Feb 02 15:32:28 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [NOTICE]   (246762) : haproxy version is 2.8.14-c23fe91
Feb 02 15:32:28 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [NOTICE]   (246762) : path to executable is /usr/sbin/haproxy
Feb 02 15:32:28 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [WARNING]  (246762) : Exiting Master process...
Feb 02 15:32:28 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [ALERT]    (246762) : Current worker (246764) exited with code 143 (Terminated)
Feb 02 15:32:28 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[246758]: [WARNING]  (246762) : All workers exited. Exiting... (0)
Feb 02 15:32:28 compute-0 systemd[1]: libpod-8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228.scope: Deactivated successfully.
Feb 02 15:32:28 compute-0 podman[246839]: 2026-02-02 15:32:28.288379108 +0000 UTC m=+0.039918150 container died 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228-userdata-shm.mount: Deactivated successfully.
Feb 02 15:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-392e1f254c37402767e17a93ffc9cc09d5e4d8ff37a311c9744fc3bab54483f7-merged.mount: Deactivated successfully.
Feb 02 15:32:28 compute-0 podman[246839]: 2026-02-02 15:32:28.327564045 +0000 UTC m=+0.079103017 container cleanup 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.331 239549 INFO nova.virt.libvirt.driver [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Instance destroyed successfully.
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.332 239549 DEBUG nova.objects.instance [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'resources' on Instance uuid 947d8658-9954-4913-a435-b11628cafdf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:28 compute-0 systemd[1]: libpod-conmon-8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228.scope: Deactivated successfully.
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.345 239549 DEBUG nova.virt.libvirt.vif [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:32:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-897504087',display_name='tempest-VolumesActionsTest-instance-897504087',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-897504087',id=2,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:32:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-huaj6a65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:32:26Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=947d8658-9954-4913-a435-b11628cafdf2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.345 239549 DEBUG nova.network.os_vif_util [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "address": "fa:16:3e:4c:e9:7d", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df1eb3c-62", "ovs_interfaceid": "0df1eb3c-622f-404f-847d-1d6af54dfaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.346 239549 DEBUG nova.network.os_vif_util [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.346 239549 DEBUG os_vif [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.347 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.347 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0df1eb3c-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.350 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.352 239549 INFO os_vif [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:e9:7d,bridge_name='br-int',has_traffic_filtering=True,id=0df1eb3c-622f-404f-847d-1d6af54dfaac,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df1eb3c-62')
Feb 02 15:32:28 compute-0 podman[246879]: 2026-02-02 15:32:28.38568556 +0000 UTC m=+0.042578041 container remove 8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.389 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[388c4152-aa18-4f99-9b42-1e5a61a9a581]: (4, ('Mon Feb  2 03:32:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228)\n8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228\nMon Feb  2 03:32:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228)\n8030cf6bcd77029d371ae2112cd0c68022a599f898626e17a76c3d984a9c9228\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.390 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9198459b-7a8d-4713-84bd-41b71c851b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.391 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:28 compute-0 kernel: tap67e1b911-f0: left promiscuous mode
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.394 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.399 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.403 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[62098498-6fb7-4ca9-ba5c-8e05bc808a52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.417 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cee16789-e611-4b4f-9ac2-dce014cdbc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.418 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[16124a80-e312-4b0c-83b4-6910d9dc9f5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.427 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[85ecc06c-7f61-4f8d-bf23-57d602cc064a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383046, 'reachable_time': 43443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246913, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e1b911\x2df9d9\x2d4f65\x2dae5c\x2d193b47a00180.mount: Deactivated successfully.
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.431 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:32:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:28.431 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3cca3e-99c5-4aa4-9eec-f1a28344465c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb 02 15:32:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb 02 15:32:28 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 02 15:32:28 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 02 15:32:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb 02 15:32:28 compute-0 ceph-mon[75334]: pgmap v925: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 2.5 MiB/s wr, 107 op/s
Feb 02 15:32:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/234864633' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/234864633' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.596 239549 INFO nova.virt.libvirt.driver [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Deleting instance files /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2_del
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.597 239549 INFO nova.virt.libvirt.driver [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Deletion of /var/lib/nova/instances/947d8658-9954-4913-a435-b11628cafdf2_del complete
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.899 239549 INFO nova.compute.manager [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Took 0.80 seconds to destroy the instance on the hypervisor.
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.899 239549 DEBUG oslo.service.loopingcall [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.900 239549 DEBUG nova.compute.manager [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:32:28 compute-0 nova_compute[239545]: 2026-02-02 15:32:28.900 239549 DEBUG nova.network.neutron [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.340 239549 DEBUG nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-unplugged-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.341 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.341 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.342 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.342 239549 DEBUG nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] No waiting events found dispatching network-vif-unplugged-0df1eb3c-622f-404f-847d-1d6af54dfaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.343 239549 DEBUG nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-unplugged-0df1eb3c-622f-404f-847d-1d6af54dfaac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.343 239549 DEBUG nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.343 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "947d8658-9954-4913-a435-b11628cafdf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.344 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.344 239549 DEBUG oslo_concurrency.lockutils [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.345 239549 DEBUG nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] No waiting events found dispatching network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:29 compute-0 nova_compute[239545]: 2026-02-02 15:32:29.345 239549 WARNING nova.compute.manager [req-e46b4399-1ecd-403a-99f5-1142528823ec req-dee8a9e7-8700-4516-91aa-0f9c756204ae d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received unexpected event network-vif-plugged-0df1eb3c-622f-404f-847d-1d6af54dfaac for instance with vm_state active and task_state deleting.
Feb 02 15:32:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 536 KiB/s rd, 401 KiB/s wr, 90 op/s
Feb 02 15:32:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb 02 15:32:29 compute-0 ceph-mon[75334]: osdmap e163: 3 total, 3 up, 3 in
Feb 02 15:32:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb 02 15:32:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb 02 15:32:30 compute-0 ceph-mon[75334]: pgmap v927: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 536 KiB/s rd, 401 KiB/s wr, 90 op/s
Feb 02 15:32:30 compute-0 ceph-mon[75334]: osdmap e164: 3 total, 3 up, 3 in
Feb 02 15:32:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb 02 15:32:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb 02 15:32:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb 02 15:32:30 compute-0 nova_compute[239545]: 2026-02-02 15:32:30.976 239549 DEBUG nova.network.neutron [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:30 compute-0 nova_compute[239545]: 2026-02-02 15:32:30.990 239549 INFO nova.compute.manager [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Took 2.09 seconds to deallocate network for instance.
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.041 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.041 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.096 239549 DEBUG oslo_concurrency.processutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 55 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 KiB/s wr, 190 op/s
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.509 239549 DEBUG nova.compute.manager [req-a6e733be-7aa5-41e2-98a0-c086d063ab03 req-3c39ebf0-cb0b-4460-bee3-fb0c43a7e8be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Received event network-vif-deleted-0df1eb3c-622f-404f-847d-1d6af54dfaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb 02 15:32:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb 02 15:32:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb 02 15:32:31 compute-0 ceph-mon[75334]: osdmap e165: 3 total, 3 up, 3 in
Feb 02 15:32:31 compute-0 ceph-mon[75334]: pgmap v930: 305 pgs: 305 active+clean; 55 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 KiB/s wr, 190 op/s
Feb 02 15:32:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:32:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284294430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.703 239549 DEBUG oslo_concurrency.processutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.709 239549 DEBUG nova.compute.provider_tree [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.733 239549 DEBUG nova.scheduler.client.report [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.770 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.804 239549 INFO nova.scheduler.client.report [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Deleted allocations for instance 947d8658-9954-4913-a435-b11628cafdf2
Feb 02 15:32:31 compute-0 nova_compute[239545]: 2026-02-02 15:32:31.865 239549 DEBUG oslo_concurrency.lockutils [None req-eb7630c2-6b5e-4a6d-adc6-e932f02ad650 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "947d8658-9954-4913-a435-b11628cafdf2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb 02 15:32:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb 02 15:32:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb 02 15:32:32 compute-0 ceph-mon[75334]: osdmap e166: 3 total, 3 up, 3 in
Feb 02 15:32:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2284294430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:32 compute-0 ceph-mon[75334]: osdmap e167: 3 total, 3 up, 3 in
Feb 02 15:32:33 compute-0 nova_compute[239545]: 2026-02-02 15:32:33.060 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:33 compute-0 nova_compute[239545]: 2026-02-02 15:32:33.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.0 KiB/s wr, 255 op/s
Feb 02 15:32:33 compute-0 ceph-mon[75334]: pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.0 KiB/s wr, 255 op/s
Feb 02 15:32:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4137136587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb 02 15:32:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4137136587' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb 02 15:32:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb 02 15:32:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 KiB/s wr, 179 op/s
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.572 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.573 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb 02 15:32:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb 02 15:32:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.664 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:32:35 compute-0 ceph-mon[75334]: osdmap e168: 3 total, 3 up, 3 in
Feb 02 15:32:35 compute-0 ceph-mon[75334]: pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 KiB/s wr, 179 op/s
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.843 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.843 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.850 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:32:35 compute-0 nova_compute[239545]: 2026-02-02 15:32:35.850 239549 INFO nova.compute.claims [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.125 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:32:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843801510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.675 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.679 239549 DEBUG nova.compute.provider_tree [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.695 239549 DEBUG nova.scheduler.client.report [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:32:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb 02 15:32:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.720 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.721 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:32:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb 02 15:32:36 compute-0 ceph-mon[75334]: osdmap e169: 3 total, 3 up, 3 in
Feb 02 15:32:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3843801510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.774 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.775 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.792 239549 INFO nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.816 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.940 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.942 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.942 239549 INFO nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Creating image(s)
Feb 02 15:32:36 compute-0 sudo[246960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:32:36 compute-0 sudo[246960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:36 compute-0 sudo[246960]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.963 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:36 compute-0 nova_compute[239545]: 2026-02-02 15:32:36.991 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:37 compute-0 sudo[247003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:32:37 compute-0 sudo[247003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.021 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.025 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.041 239549 DEBUG nova.policy [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83ee7fa03617458e9265b743f0ff61cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6977b6ce680b402a9c819ab435e57786', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.084 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.085 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.086 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.086 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.121 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.125 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 5cf71182-38c6-439e-bbee-d685c1ab0822_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.364 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 5cf71182-38c6-439e-bbee-d685c1ab0822_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.0 KiB/s wr, 99 op/s
Feb 02 15:32:37 compute-0 sudo[247003]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.452 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] resizing rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:32:37 compute-0 sudo[247189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:32:37 compute-0 sudo[247189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:37 compute-0 sudo[247189]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.541 239549 DEBUG nova.objects.instance [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'migration_context' on Instance uuid 5cf71182-38c6-439e-bbee-d685c1ab0822 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.557 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.558 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Ensure instance console log exists: /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.558 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.559 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.559 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:37 compute-0 sudo[247230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:32:37 compute-0 sudo[247230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:37 compute-0 ceph-mon[75334]: osdmap e170: 3 total, 3 up, 3 in
Feb 02 15:32:37 compute-0 ceph-mon[75334]: osdmap e171: 3 total, 3 up, 3 in
Feb 02 15:32:37 compute-0 ceph-mon[75334]: pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.0 KiB/s wr, 99 op/s
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:32:37 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:32:37 compute-0 nova_compute[239545]: 2026-02-02 15:32:37.841 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Successfully created port: a75a771e-79fe-4f64-9385-15eb483f0c4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.851171813 +0000 UTC m=+0.044410791 container create 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:32:37 compute-0 systemd[1]: Started libpod-conmon-497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82.scope.
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.829604285 +0000 UTC m=+0.022843273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.945162225 +0000 UTC m=+0.138401213 container init 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.951222001 +0000 UTC m=+0.144460969 container start 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:32:37 compute-0 flamboyant_nobel[247285]: 167 167
Feb 02 15:32:37 compute-0 systemd[1]: libpod-497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82.scope: Deactivated successfully.
Feb 02 15:32:37 compute-0 conmon[247285]: conmon 497e111a406adf9eeb7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82.scope/container/memory.events
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.956960607 +0000 UTC m=+0.150199595 container attach 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:32:37 compute-0 podman[247269]: 2026-02-02 15:32:37.958041297 +0000 UTC m=+0.151280275 container died 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f0bfb9b351598ea01f6654709e8f4aa52f73319912df96c76b5888407f03f65-merged.mount: Deactivated successfully.
Feb 02 15:32:38 compute-0 podman[247269]: 2026-02-02 15:32:38.003803873 +0000 UTC m=+0.197042841 container remove 497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:32:38 compute-0 systemd[1]: libpod-conmon-497e111a406adf9eeb7eda6ca6956a847c8b943e1e87e70665abf4dfee54bc82.scope: Deactivated successfully.
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.062 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.110296586 +0000 UTC m=+0.033425031 container create c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:32:38 compute-0 systemd[1]: Started libpod-conmon-c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a.scope.
Feb 02 15:32:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.190194254 +0000 UTC m=+0.113322739 container init c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.096196302 +0000 UTC m=+0.019324787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.196654121 +0000 UTC m=+0.119782566 container start c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.204864585 +0000 UTC m=+0.127993090 container attach c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.349 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb 02 15:32:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb 02 15:32:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb 02 15:32:38 compute-0 dazzling_heisenberg[247325]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:32:38 compute-0 dazzling_heisenberg[247325]: --> All data devices are unavailable
Feb 02 15:32:38 compute-0 systemd[1]: libpod-c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a.scope: Deactivated successfully.
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.615612092 +0000 UTC m=+0.538740567 container died c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-21dbe8ed77339e2e3d67834524a3d25bd747a1877eb9478d18f259b159e85ee0-merged.mount: Deactivated successfully.
Feb 02 15:32:38 compute-0 podman[247309]: 2026-02-02 15:32:38.696432765 +0000 UTC m=+0.619561250 container remove c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:32:38 compute-0 systemd[1]: libpod-conmon-c9b532a8e4cf8e6fd04995ee08ed8b34473220d9ede8c6bcd950208630b39c9a.scope: Deactivated successfully.
Feb 02 15:32:38 compute-0 sudo[247230]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:38 compute-0 sudo[247357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:32:38 compute-0 sudo[247357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:38 compute-0 sudo[247357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:38 compute-0 sudo[247382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:32:38 compute-0 sudo[247382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.926 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Successfully updated port: a75a771e-79fe-4f64-9385-15eb483f0c4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.954 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.955 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquired lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:32:38 compute-0 nova_compute[239545]: 2026-02-02 15:32:38.955 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:32:39 compute-0 nova_compute[239545]: 2026-02-02 15:32:39.059 239549 DEBUG nova.compute.manager [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-changed-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:39 compute-0 nova_compute[239545]: 2026-02-02 15:32:39.059 239549 DEBUG nova.compute.manager [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Refreshing instance network info cache due to event network-changed-a75a771e-79fe-4f64-9385-15eb483f0c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:32:39 compute-0 nova_compute[239545]: 2026-02-02 15:32:39.060 239549 DEBUG oslo_concurrency.lockutils [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:32:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:39 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838807379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:39 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2838807379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.124643299 +0000 UTC m=+0.044158056 container create f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:32:39 compute-0 nova_compute[239545]: 2026-02-02 15:32:39.127 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:32:39 compute-0 systemd[1]: Started libpod-conmon-f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd.scope.
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.099946095 +0000 UTC m=+0.019460862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.223999917 +0000 UTC m=+0.143514704 container init f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.230461993 +0000 UTC m=+0.149976750 container start f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:32:39 compute-0 epic_chatelet[247434]: 167 167
Feb 02 15:32:39 compute-0 systemd[1]: libpod-f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd.scope: Deactivated successfully.
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.246230093 +0000 UTC m=+0.165744870 container attach f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.246649674 +0000 UTC m=+0.166164441 container died f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b13ae3c5a9118063bee7dee7844176ca719a942822519d20824f0d2d14da1d-merged.mount: Deactivated successfully.
Feb 02 15:32:39 compute-0 podman[247418]: 2026-02-02 15:32:39.32244639 +0000 UTC m=+0.241961147 container remove f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_chatelet, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:32:39 compute-0 systemd[1]: libpod-conmon-f6967e75405ea647530b5e7fd8acc6ebc192ef8ebd2ee7b93757d33c9c9ae6fd.scope: Deactivated successfully.
Feb 02 15:32:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 51 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 478 KiB/s wr, 72 op/s
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.473138699 +0000 UTC m=+0.052274157 container create e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:32:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb 02 15:32:39 compute-0 ceph-mon[75334]: osdmap e172: 3 total, 3 up, 3 in
Feb 02 15:32:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2838807379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2838807379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.442410611 +0000 UTC m=+0.021546089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb 02 15:32:39 compute-0 systemd[1]: Started libpod-conmon-e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b.scope.
Feb 02 15:32:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb 02 15:32:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f76a742b7a1df198e6d3e4887d142b91104836417bc6cf4ef3081cc1b6abe18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f76a742b7a1df198e6d3e4887d142b91104836417bc6cf4ef3081cc1b6abe18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f76a742b7a1df198e6d3e4887d142b91104836417bc6cf4ef3081cc1b6abe18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f76a742b7a1df198e6d3e4887d142b91104836417bc6cf4ef3081cc1b6abe18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.609315591 +0000 UTC m=+0.188451079 container init e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.615084858 +0000 UTC m=+0.194220316 container start e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.638287001 +0000 UTC m=+0.217422459 container attach e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]: {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     "0": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "devices": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "/dev/loop3"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             ],
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_name": "ceph_lv0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_size": "21470642176",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "name": "ceph_lv0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "tags": {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_name": "ceph",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.crush_device_class": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.encrypted": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.objectstore": "bluestore",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_id": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.vdo": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.with_tpm": "0"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             },
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "vg_name": "ceph_vg0"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         }
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     ],
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     "1": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "devices": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "/dev/loop4"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             ],
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_name": "ceph_lv1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_size": "21470642176",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "name": "ceph_lv1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "tags": {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_name": "ceph",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.crush_device_class": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.encrypted": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.objectstore": "bluestore",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_id": "1",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.vdo": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.with_tpm": "0"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             },
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "vg_name": "ceph_vg1"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         }
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     ],
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     "2": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "devices": [
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "/dev/loop5"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             ],
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_name": "ceph_lv2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_size": "21470642176",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "name": "ceph_lv2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "tags": {
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.cluster_name": "ceph",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.crush_device_class": "",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.encrypted": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.objectstore": "bluestore",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osd_id": "2",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.vdo": "0",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:                 "ceph.with_tpm": "0"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             },
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "type": "block",
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:             "vg_name": "ceph_vg2"
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:         }
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]:     ]
Feb 02 15:32:39 compute-0 hopeful_leakey[247474]: }
Feb 02 15:32:39 compute-0 systemd[1]: libpod-e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b.scope: Deactivated successfully.
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.916294149 +0000 UTC m=+0.495429607 container died e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f76a742b7a1df198e6d3e4887d142b91104836417bc6cf4ef3081cc1b6abe18-merged.mount: Deactivated successfully.
Feb 02 15:32:39 compute-0 podman[247457]: 2026-02-02 15:32:39.997745489 +0000 UTC m=+0.576880947 container remove e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:32:40 compute-0 systemd[1]: libpod-conmon-e198959a8cb2e025c19bb4e66897921fe3802989612a9cbb02165a7a2a8eda7b.scope: Deactivated successfully.
Feb 02 15:32:40 compute-0 sudo[247382]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:40 compute-0 sudo[247496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:32:40 compute-0 sudo[247496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:40 compute-0 sudo[247496]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:40 compute-0 sudo[247521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:32:40 compute-0 sudo[247521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.231 239549 DEBUG nova.network.neutron [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Updating instance_info_cache with network_info: [{"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.257 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Releasing lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.257 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Instance network_info: |[{"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.258 239549 DEBUG oslo_concurrency.lockutils [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.258 239549 DEBUG nova.network.neutron [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Refreshing network info cache for port a75a771e-79fe-4f64-9385-15eb483f0c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.261 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Start _get_guest_xml network_info=[{"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.268 239549 WARNING nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.273 239549 DEBUG nova.virt.libvirt.host [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.275 239549 DEBUG nova.virt.libvirt.host [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.282 239549 DEBUG nova.virt.libvirt.host [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.283 239549 DEBUG nova.virt.libvirt.host [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.284 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.284 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.284 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.285 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.285 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.285 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.286 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.286 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.286 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.286 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.287 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.287 239549 DEBUG nova.virt.hardware [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.289 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.380843493 +0000 UTC m=+0.036344262 container create 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:32:40 compute-0 systemd[1]: Started libpod-conmon-789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1.scope.
Feb 02 15:32:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.453144754 +0000 UTC m=+0.108645543 container init 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.46110153 +0000 UTC m=+0.116602300 container start 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.366443731 +0000 UTC m=+0.021944530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.466173409 +0000 UTC m=+0.121674178 container attach 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:32:40 compute-0 youthful_meitner[247594]: 167 167
Feb 02 15:32:40 compute-0 systemd[1]: libpod-789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1.scope: Deactivated successfully.
Feb 02 15:32:40 compute-0 conmon[247594]: conmon 789aa0c589f17731d158 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1.scope/container/memory.events
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.468482002 +0000 UTC m=+0.123982781 container died 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2a45727e4ff09ed9fccc256106e677f8be86ca96528915fd144714948af0a0-merged.mount: Deactivated successfully.
Feb 02 15:32:40 compute-0 ceph-mon[75334]: pgmap v941: 305 pgs: 305 active+clean; 51 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 478 KiB/s wr, 72 op/s
Feb 02 15:32:40 compute-0 ceph-mon[75334]: osdmap e173: 3 total, 3 up, 3 in
Feb 02 15:32:40 compute-0 podman[247559]: 2026-02-02 15:32:40.524938451 +0000 UTC m=+0.180439220 container remove 789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:32:40 compute-0 systemd[1]: libpod-conmon-789aa0c589f17731d1586435c8c94739c37f535967067e0ad9c65938f12ec7f1.scope: Deactivated successfully.
Feb 02 15:32:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb 02 15:32:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb 02 15:32:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb 02 15:32:40 compute-0 podman[247619]: 2026-02-02 15:32:40.676443881 +0000 UTC m=+0.046705494 container create d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 02 15:32:40 compute-0 systemd[1]: Started libpod-conmon-d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039.scope.
Feb 02 15:32:40 compute-0 podman[247619]: 2026-02-02 15:32:40.654641297 +0000 UTC m=+0.024902910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:32:40 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a120d3ecf7b441779e73a6feee3c58b7491b42ac3108d482f40ae2a4dce177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a120d3ecf7b441779e73a6feee3c58b7491b42ac3108d482f40ae2a4dce177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a120d3ecf7b441779e73a6feee3c58b7491b42ac3108d482f40ae2a4dce177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a120d3ecf7b441779e73a6feee3c58b7491b42ac3108d482f40ae2a4dce177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:40 compute-0 podman[247619]: 2026-02-02 15:32:40.779336636 +0000 UTC m=+0.149598249 container init d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:32:40 compute-0 podman[247619]: 2026-02-02 15:32:40.787347884 +0000 UTC m=+0.157609497 container start d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:32:40 compute-0 podman[247619]: 2026-02-02 15:32:40.792892855 +0000 UTC m=+0.163154478 container attach d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:32:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564310664' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.840 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.861 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:40 compute-0 nova_compute[239545]: 2026-02-02 15:32:40.864 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.221 239549 DEBUG nova.network.neutron [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Updated VIF entry in instance network info cache for port a75a771e-79fe-4f64-9385-15eb483f0c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.222 239549 DEBUG nova.network.neutron [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Updating instance_info_cache with network_info: [{"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.240 239549 DEBUG oslo_concurrency.lockutils [req-37472b0c-f3b3-4c5b-aae9-949f69fbf3df req-0254aba5-7b77-41d3-b689-02966ec7736e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-5cf71182-38c6-439e-bbee-d685c1ab0822" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:32:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 4.9 MiB/s wr, 271 op/s
Feb 02 15:32:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2957354413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:41 compute-0 lvm[247753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:32:41 compute-0 lvm[247753]: VG ceph_vg0 finished
Feb 02 15:32:41 compute-0 lvm[247754]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:32:41 compute-0 lvm[247754]: VG ceph_vg1 finished
Feb 02 15:32:41 compute-0 lvm[247758]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:32:41 compute-0 lvm[247758]: VG ceph_vg2 finished
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.459 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.461 239549 DEBUG nova.virt.libvirt.vif [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-333965738',display_name='tempest-VolumesActionsTest-instance-333965738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-333965738',id=3,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-3bzp7los',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:32:36Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=5cf71182-38c6-439e-bbee-d685c1ab0822,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.462 239549 DEBUG nova.network.os_vif_util [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.462 239549 DEBUG nova.network.os_vif_util [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.464 239549 DEBUG nova.objects.instance [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cf71182-38c6-439e-bbee-d685c1ab0822 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.484 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <uuid>5cf71182-38c6-439e-bbee-d685c1ab0822</uuid>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <name>instance-00000003</name>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesActionsTest-instance-333965738</nova:name>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:32:40</nova:creationTime>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:user uuid="83ee7fa03617458e9265b743f0ff61cb">tempest-VolumesActionsTest-571181442-project-member</nova:user>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:project uuid="6977b6ce680b402a9c819ab435e57786">tempest-VolumesActionsTest-571181442</nova:project>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <nova:port uuid="a75a771e-79fe-4f64-9385-15eb483f0c4f">
Feb 02 15:32:41 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <system>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="serial">5cf71182-38c6-439e-bbee-d685c1ab0822</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="uuid">5cf71182-38c6-439e-bbee-d685c1ab0822</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </system>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <os>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </os>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <features>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </features>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/5cf71182-38c6-439e-bbee-d685c1ab0822_disk">
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </source>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config">
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </source>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:32:41 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:b4:74:c4"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <target dev="tapa75a771e-79"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/console.log" append="off"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <video>
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </video>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:32:41 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:32:41 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:32:41 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:32:41 compute-0 nova_compute[239545]: </domain>
Feb 02 15:32:41 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.485 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Preparing to wait for external event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.485 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.486 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.486 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.487 239549 DEBUG nova.virt.libvirt.vif [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-333965738',display_name='tempest-VolumesActionsTest-instance-333965738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-333965738',id=3,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-3bzp7los',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:32:36Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=5cf71182-38c6-439e-bbee-d685c1ab0822,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.487 239549 DEBUG nova.network.os_vif_util [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.488 239549 DEBUG nova.network.os_vif_util [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.488 239549 DEBUG os_vif [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.489 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.489 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.490 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.493 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.493 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa75a771e-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.494 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa75a771e-79, col_values=(('external_ids', {'iface-id': 'a75a771e-79fe-4f64-9385-15eb483f0c4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:74:c4', 'vm-uuid': '5cf71182-38c6-439e-bbee-d685c1ab0822'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:41 compute-0 NetworkManager[49171]: <info>  [1770046361.4960] manager: (tapa75a771e-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.495 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.497 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.503 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.504 239549 INFO os_vif [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79')
Feb 02 15:32:41 compute-0 thirsty_bhabha[247635]: {}
Feb 02 15:32:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.558 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.558 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.558 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] No VIF found with MAC fa:16:3e:b4:74:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.559 239549 INFO nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Using config drive
Feb 02 15:32:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb 02 15:32:41 compute-0 ceph-mon[75334]: osdmap e174: 3 total, 3 up, 3 in
Feb 02 15:32:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3564310664' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:41 compute-0 ceph-mon[75334]: pgmap v944: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 4.9 MiB/s wr, 271 op/s
Feb 02 15:32:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2957354413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.579 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:41 compute-0 systemd[1]: libpod-d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039.scope: Deactivated successfully.
Feb 02 15:32:41 compute-0 systemd[1]: libpod-d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039.scope: Consumed 1.141s CPU time.
Feb 02 15:32:41 compute-0 podman[247619]: 2026-02-02 15:32:41.58211361 +0000 UTC m=+0.952375203 container died d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:32:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb 02 15:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a120d3ecf7b441779e73a6feee3c58b7491b42ac3108d482f40ae2a4dce177-merged.mount: Deactivated successfully.
Feb 02 15:32:41 compute-0 podman[247619]: 2026-02-02 15:32:41.686026833 +0000 UTC m=+1.056288426 container remove d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:32:41 compute-0 systemd[1]: libpod-conmon-d271f2eeef78991cec342a4282ff93e76776aff7d50dde3c23ca496d72d84039.scope: Deactivated successfully.
Feb 02 15:32:41 compute-0 sudo[247521]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:32:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:32:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:41 compute-0 sudo[247794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:32:41 compute-0 sudo[247794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:32:41 compute-0 sudo[247794]: pam_unix(sudo:session): session closed for user root
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.856 239549 INFO nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Creating config drive at /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.861 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmzjwz2vs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:41 compute-0 nova_compute[239545]: 2026-02-02 15:32:41.980 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmzjwz2vs" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.003 239549 DEBUG nova.storage.rbd_utils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] rbd image 5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.007 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config 5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3919461113' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3919461113' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.061530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362061558, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1021, "num_deletes": 251, "total_data_size": 1248853, "memory_usage": 1267608, "flush_reason": "Manual Compaction"}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362069616, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 901254, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18645, "largest_seqno": 19665, "table_properties": {"data_size": 896766, "index_size": 2012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11523, "raw_average_key_size": 20, "raw_value_size": 887138, "raw_average_value_size": 1607, "num_data_blocks": 88, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046300, "oldest_key_time": 1770046300, "file_creation_time": 1770046362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8126 microseconds, and 2153 cpu microseconds.
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.069652) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 901254 bytes OK
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.069668) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.073960) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.073989) EVENT_LOG_v1 {"time_micros": 1770046362073982, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.074011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1243879, prev total WAL file size 1243879, number of live WAL files 2.
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.074471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(880KB)], [41(9247KB)]
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362074499, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10370769, "oldest_snapshot_seqno": -1}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4623 keys, 7360750 bytes, temperature: kUnknown
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362141258, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7360750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7329148, "index_size": 18948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 112525, "raw_average_key_size": 24, "raw_value_size": 7244996, "raw_average_value_size": 1567, "num_data_blocks": 793, "num_entries": 4623, "num_filter_entries": 4623, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.141465) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7360750 bytes
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.149139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 110.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(19.7) write-amplify(8.2) OK, records in: 5116, records dropped: 493 output_compression: NoCompression
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.149169) EVENT_LOG_v1 {"time_micros": 1770046362149156, "job": 20, "event": "compaction_finished", "compaction_time_micros": 66822, "compaction_time_cpu_micros": 15637, "output_level": 6, "num_output_files": 1, "total_output_size": 7360750, "num_input_records": 5116, "num_output_records": 4623, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362149361, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046362150010, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.074424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.150094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.150101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.150102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.150104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:42.150106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.211 239549 DEBUG oslo_concurrency.processutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config 5cf71182-38c6-439e-bbee-d685c1ab0822_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.212 239549 INFO nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Deleting local config drive /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822/disk.config because it was imported into RBD.
Feb 02 15:32:42 compute-0 kernel: tapa75a771e-79: entered promiscuous mode
Feb 02 15:32:42 compute-0 systemd-udevd[247752]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:32:42 compute-0 ovn_controller[144995]: 2026-02-02T15:32:42Z|00043|binding|INFO|Claiming lport a75a771e-79fe-4f64-9385-15eb483f0c4f for this chassis.
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.2526] manager: (tapa75a771e-79): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 02 15:32:42 compute-0 ovn_controller[144995]: 2026-02-02T15:32:42Z|00044|binding|INFO|a75a771e-79fe-4f64-9385-15eb483f0c4f: Claiming fa:16:3e:b4:74:c4 10.100.0.13
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.253 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 ovn_controller[144995]: 2026-02-02T15:32:42Z|00045|binding|INFO|Setting lport a75a771e-79fe-4f64-9385-15eb483f0c4f ovn-installed in OVS
Feb 02 15:32:42 compute-0 ovn_controller[144995]: 2026-02-02T15:32:42Z|00046|binding|INFO|Setting lport a75a771e-79fe-4f64-9385-15eb483f0c4f up in Southbound
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.2630] device (tapa75a771e-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.262 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.2636] device (tapa75a771e-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.262 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:74:c4 10.100.0.13'], port_security=['fa:16:3e:b4:74:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cf71182-38c6-439e-bbee-d685c1ab0822', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=a75a771e-79fe-4f64-9385-15eb483f0c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.264 154982 INFO neutron.agent.ovn.metadata.agent [-] Port a75a771e-79fe-4f64-9385-15eb483f0c4f in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 bound to our chassis
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.265 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:42 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.266 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.273 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4163f66-b851-4a7e-a093-2f5f72c97e1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.273 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e1b911-f1 in ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:32:42 compute-0 systemd-machined[207609]: New machine qemu-3-instance-00000003.
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.275 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e1b911-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.275 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[83235efd-242a-4822-b8d3-30a276e12b84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.275 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fe4977-967f-477d-8da4-807a97c7a7f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.283 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ca9c58-a3f8-45ac-8a39-e95121537e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.293 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[25d5ec74-1d73-48d2-ab11-342887dc56cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.312 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[3add8fbf-c878-4f3b-8047-a133bd1190a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.3181] manager: (tap67e1b911-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.317 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4462bf53-03c8-47f4-85df-b0f6683e1bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.345 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[08d4efd3-193f-4a35-b2cd-8fd48e18292a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.348 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[a505eaba-b82c-4646-8470-71a63d25bd88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.3683] device (tap67e1b911-f0): carrier: link connected
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.370 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[8b233d89-1464-48d7-ba6d-846eedd8a57d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.387 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dfb4ff-f878-4cab-9efd-575f9a41a1eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384828, 'reachable_time': 18728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247902, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.402 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1008f7-c34e-4014-bb2f-376c4e99d506]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:7f71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384828, 'tstamp': 384828}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247903, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.419 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f9fd3-9955-4ef2-a09d-9b2707c5c289]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384828, 'reachable_time': 18728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247904, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.447 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ba58ce6d-c479-4275-9a22-52bbe33741ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.498 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[434f91f2-86a6-44bd-9877-a248bc614dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.499 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.499 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.499 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e1b911-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:42 compute-0 NetworkManager[49171]: <info>  [1770046362.5019] manager: (tap67e1b911-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 02 15:32:42 compute-0 kernel: tap67e1b911-f0: entered promiscuous mode
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.501 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.506 239549 DEBUG nova.compute.manager [req-81e0941a-35f6-4066-b804-bc6c776c80dc req-e794a5f5-1cce-40df-849a-3281f74886f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.506 239549 DEBUG oslo_concurrency.lockutils [req-81e0941a-35f6-4066-b804-bc6c776c80dc req-e794a5f5-1cce-40df-849a-3281f74886f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.506 239549 DEBUG oslo_concurrency.lockutils [req-81e0941a-35f6-4066-b804-bc6c776c80dc req-e794a5f5-1cce-40df-849a-3281f74886f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.507 239549 DEBUG oslo_concurrency.lockutils [req-81e0941a-35f6-4066-b804-bc6c776c80dc req-e794a5f5-1cce-40df-849a-3281f74886f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.507 239549 DEBUG nova.compute.manager [req-81e0941a-35f6-4066-b804-bc6c776c80dc req-e794a5f5-1cce-40df-849a-3281f74886f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Processing event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.507 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.512 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e1b911-f0, col_values=(('external_ids', {'iface-id': '15b5741d-fc0b-4bac-96bb-fb617a54e450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:42 compute-0 ovn_controller[144995]: 2026-02-02T15:32:42Z|00047|binding|INFO|Releasing lport 15b5741d-fc0b-4bac-96bb-fb617a54e450 from this chassis (sb_readonly=0)
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.514 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 nova_compute[239545]: 2026-02-02 15:32:42.522 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.524 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.524 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb626b20-c0fb-4371-8563-c63ddd45c8ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.525 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:32:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:42.526 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'env', 'PROCESS_TAG=haproxy-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e1b911-f9d9-4f65-ae5c-193b47a00180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:32:42 compute-0 ceph-mon[75334]: osdmap e175: 3 total, 3 up, 3 in
Feb 02 15:32:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:32:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3919461113' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3919461113' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:32:42
Feb 02 15:32:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:32:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:32:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Feb 02 15:32:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:32:42 compute-0 podman[247936]: 2026-02-02 15:32:42.905973069 +0000 UTC m=+0.051738811 container create c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 15:32:42 compute-0 systemd[1]: Started libpod-conmon-c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068.scope.
Feb 02 15:32:42 compute-0 podman[247936]: 2026-02-02 15:32:42.874645555 +0000 UTC m=+0.020411317 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:32:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30cf85bf020345bcb1bba65211a32268492fe64c1f0d75da860e03615cc892d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:42 compute-0 podman[247936]: 2026-02-02 15:32:42.988963032 +0000 UTC m=+0.134728824 container init c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:32:42 compute-0 podman[247936]: 2026-02-02 15:32:42.998352418 +0000 UTC m=+0.144118160 container start c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:32:43 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [NOTICE]   (247991) : New worker (247997) forked
Feb 02 15:32:43 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [NOTICE]   (247991) : Loading success.
Feb 02 15:32:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.063 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb 02 15:32:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.101 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046363.1013432, 5cf71182-38c6-439e-bbee-d685c1ab0822 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.102 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] VM Started (Lifecycle Event)
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.104 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.108 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.111 239549 INFO nova.virt.libvirt.driver [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Instance spawned successfully.
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.112 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.137 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.141 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.144 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.145 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.145 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.146 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.146 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.146 239549 DEBUG nova.virt.libvirt.driver [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.172 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.172 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046363.1015537, 5cf71182-38c6-439e-bbee-d685c1ab0822 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.172 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] VM Paused (Lifecycle Event)
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.199 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.203 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046363.1074588, 5cf71182-38c6-439e-bbee-d685c1ab0822 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.203 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] VM Resumed (Lifecycle Event)
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.217 239549 INFO nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Took 6.28 seconds to spawn the instance on the hypervisor.
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.218 239549 DEBUG nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.235 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.238 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.269 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.299 239549 INFO nova.compute.manager [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Took 7.49 seconds to build instance.
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.318 239549 DEBUG oslo_concurrency.lockutils [None req-34bf74b0-90d4-49d7-bf7f-21a4011c5428 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.330 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046348.3295124, 947d8658-9954-4913-a435-b11628cafdf2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.330 239549 INFO nova.compute.manager [-] [instance: 947d8658-9954-4913-a435-b11628cafdf2] VM Stopped (Lifecycle Event)
Feb 02 15:32:43 compute-0 nova_compute[239545]: 2026-02-02 15:32:43.354 239549 DEBUG nova.compute.manager [None req-8691cb11-b6bd-455e-babf-29ccfbbd2891 - - - - - -] [instance: 947d8658-9954-4913-a435-b11628cafdf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:32:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 4.9 MiB/s wr, 294 op/s
Feb 02 15:32:44 compute-0 ceph-mon[75334]: osdmap e176: 3 total, 3 up, 3 in
Feb 02 15:32:44 compute-0 ceph-mon[75334]: pgmap v947: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 4.9 MiB/s wr, 294 op/s
Feb 02 15:32:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:32:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1278865017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.653 239549 DEBUG nova.compute.manager [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.654 239549 DEBUG oslo_concurrency.lockutils [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.654 239549 DEBUG oslo_concurrency.lockutils [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.654 239549 DEBUG oslo_concurrency.lockutils [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.654 239549 DEBUG nova.compute.manager [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] No waiting events found dispatching network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.655 239549 WARNING nova.compute.manager [req-6c0c546f-0494-4d31-acf3-d0c0f66f5ddc req-26f644a4-799b-4536-9c7e-f03ebbdafa13 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received unexpected event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f for instance with vm_state active and task_state None.
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.867 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.868 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.869 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.870 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.870 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.873 239549 INFO nova.compute.manager [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Terminating instance
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.876 239549 DEBUG nova.compute.manager [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:32:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:32:44 compute-0 kernel: tapa75a771e-79 (unregistering): left promiscuous mode
Feb 02 15:32:44 compute-0 NetworkManager[49171]: <info>  [1770046364.9141] device (tapa75a771e-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.914 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:44 compute-0 ovn_controller[144995]: 2026-02-02T15:32:44Z|00048|binding|INFO|Releasing lport a75a771e-79fe-4f64-9385-15eb483f0c4f from this chassis (sb_readonly=0)
Feb 02 15:32:44 compute-0 ovn_controller[144995]: 2026-02-02T15:32:44Z|00049|binding|INFO|Setting lport a75a771e-79fe-4f64-9385-15eb483f0c4f down in Southbound
Feb 02 15:32:44 compute-0 ovn_controller[144995]: 2026-02-02T15:32:44Z|00050|binding|INFO|Removing iface tapa75a771e-79 ovn-installed in OVS
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.921 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:44.927 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:74:c4 10.100.0.13'], port_security=['fa:16:3e:b4:74:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cf71182-38c6-439e-bbee-d685c1ab0822', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=a75a771e-79fe-4f64-9385-15eb483f0c4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:44.928 154982 INFO neutron.agent.ovn.metadata.agent [-] Port a75a771e-79fe-4f64-9385-15eb483f0c4f in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 unbound from our chassis
Feb 02 15:32:44 compute-0 nova_compute[239545]: 2026-02-02 15:32:44.928 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:44.930 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e1b911-f9d9-4f65-ae5c-193b47a00180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:32:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:44.930 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1000c1a-2429-4bdc-961e-12f306f14f79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:44.931 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace which is not needed anymore
Feb 02 15:32:44 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 02 15:32:44 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 2.628s CPU time.
Feb 02 15:32:44 compute-0 systemd-machined[207609]: Machine qemu-3-instance-00000003 terminated.
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [NOTICE]   (247991) : haproxy version is 2.8.14-c23fe91
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [NOTICE]   (247991) : path to executable is /usr/sbin/haproxy
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [WARNING]  (247991) : Exiting Master process...
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [ALERT]    (247991) : Current worker (247997) exited with code 143 (Terminated)
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[247966]: [WARNING]  (247991) : All workers exited. Exiting... (0)
Feb 02 15:32:45 compute-0 systemd[1]: libpod-c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068.scope: Deactivated successfully.
Feb 02 15:32:45 compute-0 podman[248029]: 2026-02-02 15:32:45.042002468 +0000 UTC m=+0.038461589 container died c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 02 15:32:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Feb 02 15:32:45 compute-0 kernel: tapa75a771e-79: entered promiscuous mode
Feb 02 15:32:45 compute-0 kernel: tapa75a771e-79 (unregistering): left promiscuous mode
Feb 02 15:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068-userdata-shm.mount: Deactivated successfully.
Feb 02 15:32:45 compute-0 ovn_controller[144995]: 2026-02-02T15:32:45Z|00051|binding|INFO|Claiming lport a75a771e-79fe-4f64-9385-15eb483f0c4f for this chassis.
Feb 02 15:32:45 compute-0 ovn_controller[144995]: 2026-02-02T15:32:45Z|00052|binding|INFO|a75a771e-79fe-4f64-9385-15eb483f0c4f: Claiming fa:16:3e:b4:74:c4 10.100.0.13
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.098 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Feb 02 15:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30cf85bf020345bcb1bba65211a32268492fe64c1f0d75da860e03615cc892d-merged.mount: Deactivated successfully.
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.110 239549 INFO nova.virt.libvirt.driver [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Instance destroyed successfully.
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.111 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ovn_controller[144995]: 2026-02-02T15:32:45Z|00053|if_status|INFO|Not setting lport a75a771e-79fe-4f64-9385-15eb483f0c4f down as sb is readonly
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.112 239549 DEBUG nova.objects.instance [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lazy-loading 'resources' on Instance uuid 5cf71182-38c6-439e-bbee-d685c1ab0822 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.113 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ovn_controller[144995]: 2026-02-02T15:32:45Z|00054|binding|INFO|Releasing lport a75a771e-79fe-4f64-9385-15eb483f0c4f from this chassis (sb_readonly=1)
Feb 02 15:32:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.117 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:74:c4 10.100.0.13'], port_security=['fa:16:3e:b4:74:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cf71182-38c6-439e-bbee-d685c1ab0822', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=a75a771e-79fe-4f64-9385-15eb483f0c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:45 compute-0 podman[248029]: 2026-02-02 15:32:45.121014962 +0000 UTC m=+0.117474073 container cleanup c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:32:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1278865017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.123 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.126 239549 DEBUG nova.virt.libvirt.vif [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-333965738',display_name='tempest-VolumesActionsTest-instance-333965738',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-333965738',id=3,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:32:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6977b6ce680b402a9c819ab435e57786',ramdisk_id='',reservation_id='r-3bzp7los',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-571181442',owner_user_name='tempest-VolumesActionsTest-571181442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:32:43Z,user_data=None,user_id='83ee7fa03617458e9265b743f0ff61cb',uuid=5cf71182-38c6-439e-bbee-d685c1ab0822,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.126 239549 DEBUG nova.network.os_vif_util [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converting VIF {"id": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "address": "fa:16:3e:b4:74:c4", "network": {"id": "67e1b911-f9d9-4f65-ae5c-193b47a00180", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1159292893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6977b6ce680b402a9c819ab435e57786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75a771e-79", "ovs_interfaceid": "a75a771e-79fe-4f64-9385-15eb483f0c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.127 239549 DEBUG nova.network.os_vif_util [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.127 239549 DEBUG os_vif [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:32:45 compute-0 systemd[1]: libpod-conmon-c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068.scope: Deactivated successfully.
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.130 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.131 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa75a771e-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.133 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.135 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.137 239549 INFO os_vif [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:74:c4,bridge_name='br-int',has_traffic_filtering=True,id=a75a771e-79fe-4f64-9385-15eb483f0c4f,network=Network(67e1b911-f9d9-4f65-ae5c-193b47a00180),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa75a771e-79')
Feb 02 15:32:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954073519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954073519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:45 compute-0 podman[248062]: 2026-02-02 15:32:45.202248987 +0000 UTC m=+0.057317344 container remove c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.207 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[662408c0-3c75-4e6b-8c90-fb8d1a98a392]: (4, ('Mon Feb  2 03:32:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068)\nc6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068\nMon Feb  2 03:32:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (c6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068)\nc6140f00e5796280f9695bfde6136cb4c30226b86766ebef2fdecea0d460c068\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.209 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d07d9d3-440f-4bf7-90c6-013a913d0bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.210 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.211 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 kernel: tap67e1b911-f0: left promiscuous mode
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.218 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.220 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2d07fdd7-d439-406f-9aad-fa97de7c9fb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.237 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3f45a370-8201-41a8-a730-b087e2b62be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.238 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a25ed176-3027-4f74-9afd-e53dadc78d92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.249 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e75a901a-3887-4819-a1a7-489760fe45ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384822, 'reachable_time': 33712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248094, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.251 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.251 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd795ba-05fe-4c02-8b24-26ba865767c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.252 154982 INFO neutron.agent.ovn.metadata.agent [-] Port a75a771e-79fe-4f64-9385-15eb483f0c4f in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 bound to our chassis
Feb 02 15:32:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e1b911\x2df9d9\x2d4f65\x2dae5c\x2d193b47a00180.mount: Deactivated successfully.
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.252 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.260 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eb78735f-a52b-44db-af14-7586a343fadf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.261 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67e1b911-f1 in ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.263 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67e1b911-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.263 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c17968f5-f1d3-4a32-8350-a568057de56a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.263 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[daaea81e-21eb-46c5-929e-db056acfdaf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.273 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad56e9e-cd2e-467e-bca2-5b549d6bc136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.283 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4121ad86-61a8-44b9-8eca-e1002d21eabf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.302 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[794b332e-d96f-48c2-b4a8-8b118636d49f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 systemd-udevd[248015]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.306 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[552f453f-9896-4ba4-83a9-cea08583bea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 NetworkManager[49171]: <info>  [1770046365.3080] manager: (tap67e1b911-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.328 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[0df154b6-ae6c-4601-b3f9-3a41f377d66c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.332 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[89e5edf9-40b2-416f-a9d6-fccf6af090f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 NetworkManager[49171]: <info>  [1770046365.3510] device (tap67e1b911-f0): carrier: link connected
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.353 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[4535b0d2-b30b-4fc6-96f2-da4ba6bc1d60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.366 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[09ce5a76-1b4a-4370-b9f2-ed980d08e7fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385126, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248120, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.377 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[efff1282-c899-4ba8-b49c-c84a9b887644]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:7f71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 385126, 'tstamp': 385126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248121, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.391 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4763d016-d09e-4b08-9e29-ae0fb7a330c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67e1b911-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:7f:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385126, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248122, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 298 op/s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.413 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6b483b-7ca4-429e-a8a9-45d2ccdeffee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.454 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e446afd-0c42-4ad4-b786-d7d787496d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.456 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.456 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.457 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67e1b911-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.458 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 kernel: tap67e1b911-f0: entered promiscuous mode
Feb 02 15:32:45 compute-0 NetworkManager[49171]: <info>  [1770046365.4605] manager: (tap67e1b911-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.460 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.462 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67e1b911-f0, col_values=(('external_ids', {'iface-id': '15b5741d-fc0b-4bac-96bb-fb617a54e450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:45 compute-0 ovn_controller[144995]: 2026-02-02T15:32:45Z|00055|binding|INFO|Releasing lport 15b5741d-fc0b-4bac-96bb-fb617a54e450 from this chassis (sb_readonly=0)
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.464 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.465 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.466 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ced3fc06-f437-4ffd-8371-d76d6a66f005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.467 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/67e1b911-f9d9-4f65-ae5c-193b47a00180.pid.haproxy
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 67e1b911-f9d9-4f65-ae5c-193b47a00180
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:32:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:45.467 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'env', 'PROCESS_TAG=haproxy-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67e1b911-f9d9-4f65-ae5c-193b47a00180.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.468 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.484 239549 INFO nova.virt.libvirt.driver [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Deleting instance files /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822_del
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.484 239549 INFO nova.virt.libvirt.driver [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Deletion of /var/lib/nova/instances/5cf71182-38c6-439e-bbee-d685c1ab0822_del complete
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.528 239549 INFO nova.compute.manager [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.529 239549 DEBUG oslo.service.loopingcall [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.529 239549 DEBUG nova.compute.manager [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:32:45 compute-0 nova_compute[239545]: 2026-02-02 15:32:45.529 239549 DEBUG nova.network.neutron [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:32:45 compute-0 podman[248154]: 2026-02-02 15:32:45.834116091 +0000 UTC m=+0.098366242 container create 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 15:32:45 compute-0 podman[248154]: 2026-02-02 15:32:45.755694564 +0000 UTC m=+0.019944765 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:32:45 compute-0 systemd[1]: Started libpod-conmon-4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0.scope.
Feb 02 15:32:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b9ec5f5a33466ea22a409a1e84be23b2c28edf4931eb3101ab0f09aa81d911/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:32:45 compute-0 podman[248154]: 2026-02-02 15:32:45.950382421 +0000 UTC m=+0.214632592 container init 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:32:45 compute-0 podman[248154]: 2026-02-02 15:32:45.955077108 +0000 UTC m=+0.219327259 container start 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [NOTICE]   (248173) : New worker (248175) forked
Feb 02 15:32:45 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [NOTICE]   (248173) : Loading success.
Feb 02 15:32:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb 02 15:32:46 compute-0 ceph-mon[75334]: osdmap e177: 3 total, 3 up, 3 in
Feb 02 15:32:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/954073519' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/954073519' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:46 compute-0 ceph-mon[75334]: pgmap v949: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 298 op/s
Feb 02 15:32:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb 02 15:32:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.336 154982 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b9a72d0a-111b-41cc-9256-39607db21489 with type ""
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.337 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:74:c4 10.100.0.13'], port_security=['fa:16:3e:b4:74:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cf71182-38c6-439e-bbee-d685c1ab0822', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6977b6ce680b402a9c819ab435e57786', 'neutron:revision_number': '5', 'neutron:security_group_ids': '20729199-588f-4645-942f-59f3b180bde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86150e6c-013a-46b4-b477-93d40ca051fb, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=a75a771e-79fe-4f64-9385-15eb483f0c4f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.338 154982 INFO neutron.agent.ovn.metadata.agent [-] Port a75a771e-79fe-4f64-9385-15eb483f0c4f in datapath 67e1b911-f9d9-4f65-ae5c-193b47a00180 unbound from our chassis
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.339 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67e1b911-f9d9-4f65-ae5c-193b47a00180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.340 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9f97d3-c048-4d15-a43c-879c091a6b88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.341 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 namespace which is not needed anymore
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.352 239549 DEBUG nova.network.neutron [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.370 239549 INFO nova.compute.manager [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Took 0.84 seconds to deallocate network for instance.
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.417 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.417 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:46 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [NOTICE]   (248173) : haproxy version is 2.8.14-c23fe91
Feb 02 15:32:46 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [NOTICE]   (248173) : path to executable is /usr/sbin/haproxy
Feb 02 15:32:46 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [WARNING]  (248173) : Exiting Master process...
Feb 02 15:32:46 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [ALERT]    (248173) : Current worker (248175) exited with code 143 (Terminated)
Feb 02 15:32:46 compute-0 neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180[248169]: [WARNING]  (248173) : All workers exited. Exiting... (0)
Feb 02 15:32:46 compute-0 systemd[1]: libpod-4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0.scope: Deactivated successfully.
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.449 239549 DEBUG nova.compute.manager [req-4cb1f29a-e97e-4e39-9389-006a57c48f74 req-0dc3bb55-3f17-4e30-8fe0-1d2eca9578d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-vif-deleted-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:46 compute-0 podman[248201]: 2026-02-02 15:32:46.454839572 +0000 UTC m=+0.049082259 container died 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.471 239549 DEBUG oslo_concurrency.processutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0-userdata-shm.mount: Deactivated successfully.
Feb 02 15:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b9ec5f5a33466ea22a409a1e84be23b2c28edf4931eb3101ab0f09aa81d911-merged.mount: Deactivated successfully.
Feb 02 15:32:46 compute-0 podman[248201]: 2026-02-02 15:32:46.488659724 +0000 UTC m=+0.082902411 container cleanup 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:32:46 compute-0 systemd[1]: libpod-conmon-4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0.scope: Deactivated successfully.
Feb 02 15:32:46 compute-0 podman[248231]: 2026-02-02 15:32:46.549345449 +0000 UTC m=+0.045865671 container remove 4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.553 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c814bcbb-790e-48c0-a587-f84832303935]: (4, ('Mon Feb  2 03:32:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0)\n4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0\nMon Feb  2 03:32:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 (4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0)\n4ab36d84443ccec53d36409bf6f9f578fafe1dc9146b5462d607c5dbcba000f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.554 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6feb5506-4d85-4958-8a46-e99eaa5feb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.555 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67e1b911-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.557 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:46 compute-0 kernel: tap67e1b911-f0: left promiscuous mode
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.564 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.567 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d97cd716-eb5d-40d9-b0df-fa13dcc70493]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.579 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b649b76-d1dd-473b-b764-f518d917a349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.580 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d080c3f8-1e57-4b97-9bf0-00be21ba3fde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.592 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8730abe2-6ce9-40a2-a5f6-880d2b1cbd69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385121, 'reachable_time': 32111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248267, 'error': None, 'target': 'ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.593 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67e1b911-f9d9-4f65-ae5c-193b47a00180 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:32:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:46.593 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[27bace57-5dfc-4ff3-be79-e14865a149bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:32:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d67e1b911\x2df9d9\x2d4f65\x2dae5c\x2d193b47a00180.mount: Deactivated successfully.
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.723 239549 DEBUG nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-vif-unplugged-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.724 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.725 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.725 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.726 239549 DEBUG nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] No waiting events found dispatching network-vif-unplugged-a75a771e-79fe-4f64-9385-15eb483f0c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.727 239549 WARNING nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received unexpected event network-vif-unplugged-a75a771e-79fe-4f64-9385-15eb483f0c4f for instance with vm_state deleted and task_state None.
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.727 239549 DEBUG nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.728 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.728 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.729 239549 DEBUG oslo_concurrency.lockutils [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.729 239549 DEBUG nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] No waiting events found dispatching network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.730 239549 WARNING nova.compute.manager [req-bffa2149-334d-4b7d-a593-7a4c1784cf86 req-323975ac-f9bf-4d3d-afb0-8430c9154d89 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Received unexpected event network-vif-plugged-a75a771e-79fe-4f64-9385-15eb483f0c4f for instance with vm_state deleted and task_state None.
Feb 02 15:32:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:32:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286214606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.984 239549 DEBUG oslo_concurrency.processutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:32:46 compute-0 nova_compute[239545]: 2026-02-02 15:32:46.991 239549 DEBUG nova.compute.provider_tree [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:32:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 02 15:32:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb 02 15:32:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb 02 15:32:47 compute-0 ceph-mon[75334]: osdmap e178: 3 total, 3 up, 3 in
Feb 02 15:32:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4286214606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:32:47 compute-0 ceph-mon[75334]: osdmap e179: 3 total, 3 up, 3 in
Feb 02 15:32:47 compute-0 nova_compute[239545]: 2026-02-02 15:32:47.328 239549 DEBUG nova.scheduler.client.report [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:32:47 compute-0 nova_compute[239545]: 2026-02-02 15:32:47.356 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:47 compute-0 nova_compute[239545]: 2026-02-02 15:32:47.399 239549 INFO nova.scheduler.client.report [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Deleted allocations for instance 5cf71182-38c6-439e-bbee-d685c1ab0822
Feb 02 15:32:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 72 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 43 KiB/s wr, 389 op/s
Feb 02 15:32:47 compute-0 nova_compute[239545]: 2026-02-02 15:32:47.472 239549 DEBUG oslo_concurrency.lockutils [None req-e7882255-0fab-4473-8578-4ae7cb8285bf 83ee7fa03617458e9265b743f0ff61cb 6977b6ce680b402a9c819ab435e57786 - - default default] Lock "5cf71182-38c6-439e-bbee-d685c1ab0822" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:48 compute-0 nova_compute[239545]: 2026-02-02 15:32:48.065 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 02 15:32:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb 02 15:32:48 compute-0 ceph-mon[75334]: pgmap v952: 305 pgs: 305 active+clean; 72 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 43 KiB/s wr, 389 op/s
Feb 02 15:32:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb 02 15:32:49 compute-0 ceph-mon[75334]: osdmap e180: 3 total, 3 up, 3 in
Feb 02 15:32:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 46 KiB/s wr, 446 op/s
Feb 02 15:32:50 compute-0 nova_compute[239545]: 2026-02-02 15:32:50.133 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 02 15:32:50 compute-0 ceph-mon[75334]: pgmap v954: 305 pgs: 305 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 46 KiB/s wr, 446 op/s
Feb 02 15:32:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb 02 15:32:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb 02 15:32:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:32:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461107716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:32:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461107716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:51 compute-0 ceph-mon[75334]: osdmap e181: 3 total, 3 up, 3 in
Feb 02 15:32:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2461107716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:32:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2461107716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:32:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 4.4 KiB/s wr, 89 op/s
Feb 02 15:32:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 02 15:32:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb 02 15:32:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.113907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372113939, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 433, "num_deletes": 261, "total_data_size": 249586, "memory_usage": 259608, "flush_reason": "Manual Compaction"}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372116695, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 246493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19666, "largest_seqno": 20098, "table_properties": {"data_size": 243937, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6180, "raw_average_key_size": 18, "raw_value_size": 238701, "raw_average_value_size": 697, "num_data_blocks": 26, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046362, "oldest_key_time": 1770046362, "file_creation_time": 1770046372, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 2843 microseconds, and 865 cpu microseconds.
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.116747) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 246493 bytes OK
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.116766) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.118989) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.119003) EVENT_LOG_v1 {"time_micros": 1770046372118998, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.119019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 246810, prev total WAL file size 246810, number of live WAL files 2.
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.119319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(240KB)], [44(7188KB)]
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372119344, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7607243, "oldest_snapshot_seqno": -1}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4433 keys, 7496797 bytes, temperature: kUnknown
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372177228, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7496797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7465561, "index_size": 19030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109874, "raw_average_key_size": 24, "raw_value_size": 7383839, "raw_average_value_size": 1665, "num_data_blocks": 792, "num_entries": 4433, "num_filter_entries": 4433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046372, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.177640) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7496797 bytes
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.181775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.8 rd, 128.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(61.3) write-amplify(30.4) OK, records in: 4965, records dropped: 532 output_compression: NoCompression
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.181797) EVENT_LOG_v1 {"time_micros": 1770046372181786, "job": 22, "event": "compaction_finished", "compaction_time_micros": 58165, "compaction_time_cpu_micros": 13405, "output_level": 6, "num_output_files": 1, "total_output_size": 7496797, "num_input_records": 4965, "num_output_records": 4433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372182070, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046372182835, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.119261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.182940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.182944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.182946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.182948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:32:52.182950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:32:52 compute-0 ceph-mon[75334]: pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 4.4 KiB/s wr, 89 op/s
Feb 02 15:32:52 compute-0 ceph-mon[75334]: osdmap e182: 3 total, 3 up, 3 in
Feb 02 15:32:53 compute-0 nova_compute[239545]: 2026-02-02 15:32:53.066 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:53 compute-0 nova_compute[239545]: 2026-02-02 15:32:53.187 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:53 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:53.188 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:32:53 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:53.189 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:32:53 compute-0 podman[248272]: 2026-02-02 15:32:53.317230836 +0000 UTC m=+0.060474864 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 02 15:32:53 compute-0 podman[248273]: 2026-02-02 15:32:53.332934113 +0000 UTC m=+0.073680023 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:32:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 121 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 357 KiB/s rd, 13 MiB/s wr, 134 op/s
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.255237938782243e-07 of space, bias 1.0, pg target 0.00015765713816346728 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.383999490113807e-06 of space, bias 1.0, pg target 0.0007151998470341421 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.6316853984224927e-07 of space, bias 1.0, pg target 4.895056195267478e-05 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002098670281834176 of space, bias 1.0, pg target 0.6296010845502529 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3883608955730566e-06 of space, bias 4.0, pg target 0.0016660330746876679 quantized to 16 (current 16)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:32:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:32:54 compute-0 ceph-mon[75334]: pgmap v958: 305 pgs: 305 active+clean; 121 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 357 KiB/s rd, 13 MiB/s wr, 134 op/s
Feb 02 15:32:54 compute-0 nova_compute[239545]: 2026-02-02 15:32:54.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:55 compute-0 nova_compute[239545]: 2026-02-02 15:32:55.136 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 201 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 22 MiB/s wr, 143 op/s
Feb 02 15:32:56 compute-0 ceph-mon[75334]: pgmap v959: 305 pgs: 305 active+clean; 201 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 22 MiB/s wr, 143 op/s
Feb 02 15:32:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:32:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 02 15:32:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb 02 15:32:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb 02 15:32:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 305 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 37 MiB/s wr, 133 op/s
Feb 02 15:32:57 compute-0 nova_compute[239545]: 2026-02-02 15:32:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:57 compute-0 nova_compute[239545]: 2026-02-02 15:32:57.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:32:57 compute-0 nova_compute[239545]: 2026-02-02 15:32:57.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:32:57 compute-0 nova_compute[239545]: 2026-02-02 15:32:57.595 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:32:57 compute-0 nova_compute[239545]: 2026-02-02 15:32:57.596 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:32:58 compute-0 nova_compute[239545]: 2026-02-02 15:32:58.067 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:32:58 compute-0 ceph-mon[75334]: osdmap e183: 3 total, 3 up, 3 in
Feb 02 15:32:58 compute-0 ceph-mon[75334]: pgmap v961: 305 pgs: 305 active+clean; 305 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 37 MiB/s wr, 133 op/s
Feb 02 15:32:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:59.245 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:32:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:59.245 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:32:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:32:59.246 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:32:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 417 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 47 MiB/s wr, 111 op/s
Feb 02 15:32:59 compute-0 nova_compute[239545]: 2026-02-02 15:32:59.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.107 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046365.1051297, 5cf71182-38c6-439e-bbee-d685c1ab0822 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.107 239549 INFO nova.compute.manager [-] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] VM Stopped (Lifecycle Event)
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.139 239549 DEBUG nova.compute.manager [None req-1795a38c-6c84-41d8-85b0-4a525ac1d447 - - - - - -] [instance: 5cf71182-38c6-439e-bbee-d685c1ab0822] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.139 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.539 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:00 compute-0 ceph-mon[75334]: pgmap v962: 305 pgs: 305 active+clean; 417 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 47 MiB/s wr, 111 op/s
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.620 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.621 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.621 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.621 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:33:00 compute-0 nova_compute[239545]: 2026-02-02 15:33:00.621 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:33:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701236620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.151 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.294 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.296 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4731MB free_disk=59.988249748945236GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.296 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.296 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.400 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.401 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.416 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 609 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 61 MiB/s wr, 130 op/s
Feb 02 15:33:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3701236620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:01 compute-0 ceph-mon[75334]: pgmap v963: 305 pgs: 305 active+clean; 609 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 61 MiB/s wr, 130 op/s
Feb 02 15:33:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:33:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236718961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.974 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:01 compute-0 nova_compute[239545]: 2026-02-02 15:33:01.979 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:33:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:02 compute-0 nova_compute[239545]: 2026-02-02 15:33:02.131 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:33:02 compute-0 nova_compute[239545]: 2026-02-02 15:33:02.245 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:33:02 compute-0 nova_compute[239545]: 2026-02-02 15:33:02.246 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3236718961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:03 compute-0 nova_compute[239545]: 2026-02-02 15:33:03.069 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:03.192 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:03 compute-0 nova_compute[239545]: 2026-02-02 15:33:03.247 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:03 compute-0 nova_compute[239545]: 2026-02-02 15:33:03.247 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:03 compute-0 nova_compute[239545]: 2026-02-02 15:33:03.248 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:03 compute-0 nova_compute[239545]: 2026-02-02 15:33:03.248 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:33:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 657 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 54 MiB/s wr, 88 op/s
Feb 02 15:33:04 compute-0 ceph-mon[75334]: pgmap v964: 305 pgs: 305 active+clean; 657 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 54 MiB/s wr, 88 op/s
Feb 02 15:33:05 compute-0 nova_compute[239545]: 2026-02-02 15:33:05.142 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 729 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 53 MiB/s wr, 58 op/s
Feb 02 15:33:05 compute-0 ceph-mon[75334]: pgmap v965: 305 pgs: 305 active+clean; 729 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 53 MiB/s wr, 58 op/s
Feb 02 15:33:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 761 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 44 MiB/s wr, 54 op/s
Feb 02 15:33:07 compute-0 ceph-mon[75334]: pgmap v966: 305 pgs: 305 active+clean; 761 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 44 MiB/s wr, 54 op/s
Feb 02 15:33:08 compute-0 nova_compute[239545]: 2026-02-02 15:33:08.106 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457061424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457061424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3699353731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3699353731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/457061424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/457061424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3699353731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3699353731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 825 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 43 MiB/s wr, 47 op/s
Feb 02 15:33:09 compute-0 ceph-mon[75334]: pgmap v967: 305 pgs: 305 active+clean; 825 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 43 MiB/s wr, 47 op/s
Feb 02 15:33:10 compute-0 nova_compute[239545]: 2026-02-02 15:33:10.145 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3072480325' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3072480325' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3072480325' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3072480325' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 953 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 45 MiB/s wr, 105 op/s
Feb 02 15:33:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 02 15:33:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb 02 15:33:12 compute-0 ceph-mon[75334]: pgmap v968: 305 pgs: 305 active+clean; 953 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 45 MiB/s wr, 105 op/s
Feb 02 15:33:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb 02 15:33:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:13 compute-0 ceph-mon[75334]: osdmap e184: 3 total, 3 up, 3 in
Feb 02 15:33:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/494383188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/494383188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:13 compute-0 nova_compute[239545]: 2026-02-02 15:33:13.156 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 38 MiB/s wr, 94 op/s
Feb 02 15:33:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/494383188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/494383188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:14 compute-0 ceph-mon[75334]: pgmap v970: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 38 MiB/s wr, 94 op/s
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:15 compute-0 nova_compute[239545]: 2026-02-02 15:33:15.148 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 02 15:33:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb 02 15:33:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb 02 15:33:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 38 MiB/s wr, 140 op/s
Feb 02 15:33:16 compute-0 ceph-mon[75334]: osdmap e185: 3 total, 3 up, 3 in
Feb 02 15:33:16 compute-0 ceph-mon[75334]: pgmap v972: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 38 MiB/s wr, 140 op/s
Feb 02 15:33:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 30 MiB/s wr, 141 op/s
Feb 02 15:33:18 compute-0 nova_compute[239545]: 2026-02-02 15:33:18.195 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2278045789' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2278045789' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:18 compute-0 ceph-mon[75334]: pgmap v973: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 30 MiB/s wr, 141 op/s
Feb 02 15:33:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2278045789' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2278045789' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 681 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 14 MiB/s wr, 70 op/s
Feb 02 15:33:19 compute-0 ceph-mon[75334]: pgmap v974: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 681 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 14 MiB/s wr, 70 op/s
Feb 02 15:33:20 compute-0 nova_compute[239545]: 2026-02-02 15:33:20.152 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 12 MiB/s wr, 103 op/s
Feb 02 15:33:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 02 15:33:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb 02 15:33:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb 02 15:33:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:33:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3350988800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:22 compute-0 ceph-mon[75334]: pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 12 MiB/s wr, 103 op/s
Feb 02 15:33:22 compute-0 ceph-mon[75334]: osdmap e186: 3 total, 3 up, 3 in
Feb 02 15:33:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3350988800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:22 compute-0 nova_compute[239545]: 2026-02-02 15:33:22.809 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:23 compute-0 nova_compute[239545]: 2026-02-02 15:33:23.230 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 51 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 949 KiB/s wr, 88 op/s
Feb 02 15:33:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb 02 15:33:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb 02 15:33:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb 02 15:33:24 compute-0 podman[248366]: 2026-02-02 15:33:24.312281492 +0000 UTC m=+0.047012618 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 02 15:33:24 compute-0 podman[248365]: 2026-02-02 15:33:24.379695101 +0000 UTC m=+0.122830786 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:33:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb 02 15:33:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb 02 15:33:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb 02 15:33:24 compute-0 ceph-mon[75334]: pgmap v977: 305 pgs: 305 active+clean; 51 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 949 KiB/s wr, 88 op/s
Feb 02 15:33:24 compute-0 ceph-mon[75334]: osdmap e187: 3 total, 3 up, 3 in
Feb 02 15:33:25 compute-0 nova_compute[239545]: 2026-02-02 15:33:25.155 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 74 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.3 MiB/s wr, 152 op/s
Feb 02 15:33:25 compute-0 ceph-mon[75334]: osdmap e188: 3 total, 3 up, 3 in
Feb 02 15:33:26 compute-0 ceph-mon[75334]: pgmap v980: 305 pgs: 305 active+clean; 74 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.3 MiB/s wr, 152 op/s
Feb 02 15:33:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:33:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1228473469' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 115 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 99 op/s
Feb 02 15:33:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1228473469' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282691929' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282691929' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:33:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3560586818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:28 compute-0 nova_compute[239545]: 2026-02-02 15:33:28.232 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb 02 15:33:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb 02 15:33:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb 02 15:33:28 compute-0 ceph-mon[75334]: pgmap v981: 305 pgs: 305 active+clean; 115 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 99 op/s
Feb 02 15:33:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3282691929' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3282691929' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3560586818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 144 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.7 MiB/s wr, 115 op/s
Feb 02 15:33:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb 02 15:33:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb 02 15:33:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb 02 15:33:29 compute-0 ceph-mon[75334]: osdmap e189: 3 total, 3 up, 3 in
Feb 02 15:33:29 compute-0 ceph-mon[75334]: pgmap v983: 305 pgs: 305 active+clean; 144 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 6.7 MiB/s wr, 115 op/s
Feb 02 15:33:30 compute-0 nova_compute[239545]: 2026-02-02 15:33:30.157 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:30 compute-0 ceph-mon[75334]: osdmap e190: 3 total, 3 up, 3 in
Feb 02 15:33:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 8.1 MiB/s wr, 187 op/s
Feb 02 15:33:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb 02 15:33:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb 02 15:33:31 compute-0 ceph-mon[75334]: pgmap v985: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 9.2 MiB/s rd, 8.1 MiB/s wr, 187 op/s
Feb 02 15:33:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb 02 15:33:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747670785' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:32 compute-0 ceph-mon[75334]: osdmap e191: 3 total, 3 up, 3 in
Feb 02 15:33:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747670785' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:33 compute-0 nova_compute[239545]: 2026-02-02 15:33:33.281 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 178 op/s
Feb 02 15:33:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3747670785' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3747670785' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:33 compute-0 ceph-mon[75334]: pgmap v987: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 178 op/s
Feb 02 15:33:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb 02 15:33:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb 02 15:33:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb 02 15:33:35 compute-0 nova_compute[239545]: 2026-02-02 15:33:35.175 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 178 op/s
Feb 02 15:33:36 compute-0 ceph-mon[75334]: osdmap e192: 3 total, 3 up, 3 in
Feb 02 15:33:36 compute-0 ceph-mon[75334]: pgmap v989: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 178 op/s
Feb 02 15:33:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1358752757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1358752757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1358752757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1358752757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.8 MiB/s wr, 183 op/s
Feb 02 15:33:38 compute-0 ceph-mon[75334]: pgmap v990: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.8 MiB/s wr, 183 op/s
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.282 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.478 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.479 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.506 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.696 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.697 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.704 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.704 239549 INFO nova.compute.claims [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:33:38 compute-0 nova_compute[239545]: 2026-02-02 15:33:38.835 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:33:39 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224624771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.366 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.371 239549 DEBUG nova.compute.provider_tree [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.402 239549 DEBUG nova.scheduler.client.report [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:33:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2224624771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.424 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.425 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:33:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 2.7 MiB/s wr, 127 op/s
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.480 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.480 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.499 239549 INFO nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.522 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.567 239549 INFO nova.virt.block_device [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Booting with volume 827587f6-b1cc-4f62-a981-dde5bf81a403 at /dev/vda
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.708 239549 DEBUG os_brick.utils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.709 239549 INFO oslo.privsep.daemon [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpe3ho5uvi/privsep.sock']
Feb 02 15:33:39 compute-0 nova_compute[239545]: 2026-02-02 15:33:39.751 239549 DEBUG nova.policy [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '07aa2f7c7016411b8d5fbeb3f4688083', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e81756eb6c234f0ea96b5432c7bdfe28', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.207 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.330 239549 INFO oslo.privsep.daemon [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Spawned new privsep daemon via rootwrap
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.233 248437 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.237 248437 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.239 248437 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.239 248437 INFO oslo.privsep.daemon [-] privsep daemon running as pid 248437
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.333 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[32fa41e9-0f89-4636-8c68-2ee317cccd07]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb 02 15:33:40 compute-0 ceph-mon[75334]: pgmap v991: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 2.7 MiB/s wr, 127 op/s
Feb 02 15:33:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb 02 15:33:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.429 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.441 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.441 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[9577bfca-1b74-4405-9c32-c95c5f09cff6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.443 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.448 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.449 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[58688cb0-f528-405c-9912-431ea365df98]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.450 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.460 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.460 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[58824bf1-65da-4258-a5d2-0ff0a3b224f6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.463 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[456fd3cb-bf49-43e4-a7c5-f35f1f85bb0f]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.464 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.475 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "nvme version" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.477 239549 DEBUG os_brick.initiator.connectors.lightos [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.478 239549 DEBUG os_brick.initiator.connectors.lightos [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.478 239549 DEBUG os_brick.initiator.connectors.lightos [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.478 239549 DEBUG os_brick.utils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] <== get_connector_properties: return (769ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.478 239549 DEBUG nova.virt.block_device [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating existing volume attachment record: 72e99f83-7d36-4e89-bc97-f68ca5015c81 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:33:40 compute-0 nova_compute[239545]: 2026-02-02 15:33:40.494 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Successfully created port: 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.175 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Successfully updated port: 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.198 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.198 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.199 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:33:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577372580' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577372580' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:33:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702421616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.296 239549 DEBUG nova.compute.manager [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.296 239549 DEBUG nova.compute.manager [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing instance network info cache due to event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.296 239549 DEBUG oslo_concurrency.lockutils [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.390 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:33:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 1.0 MiB/s wr, 111 op/s
Feb 02 15:33:41 compute-0 ceph-mon[75334]: osdmap e193: 3 total, 3 up, 3 in
Feb 02 15:33:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3577372580' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3577372580' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2702421616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.737 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.738 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.739 239549 INFO nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Creating image(s)
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.739 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.740 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Ensure instance console log exists: /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.740 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.740 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:41 compute-0 nova_compute[239545]: 2026-02-02 15:33:41.741 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:41 compute-0 sudo[248446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:33:41 compute-0 sudo[248446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:41 compute-0 sudo[248446]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:41 compute-0 sudo[248471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:33:41 compute-0 sudo[248471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.048 239549 DEBUG nova.network.neutron [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.089 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.090 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Instance network_info: |[{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.090 239549 DEBUG oslo_concurrency.lockutils [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.090 239549 DEBUG nova.network.neutron [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.093 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Start _get_guest_xml network_info=[{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '72e99f83-7d36-4e89-bc97-f68ca5015c81', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-827587f6-b1cc-4f62-a981-dde5bf81a403', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '827587f6-b1cc-4f62-a981-dde5bf81a403', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9abd1d7f-3714-46ec-acde-e1d5f8158018', 'attached_at': '', 'detached_at': '', 'volume_id': '827587f6-b1cc-4f62-a981-dde5bf81a403', 'serial': '827587f6-b1cc-4f62-a981-dde5bf81a403'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.098 239549 WARNING nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.104 239549 DEBUG nova.virt.libvirt.host [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.105 239549 DEBUG nova.virt.libvirt.host [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.107 239549 DEBUG nova.virt.libvirt.host [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.108 239549 DEBUG nova.virt.libvirt.host [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.108 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.108 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.109 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.109 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.109 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.109 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.110 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.110 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.110 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.110 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.110 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.111 239549 DEBUG nova.virt.hardware [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.168 239549 DEBUG nova.storage.rbd_utils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] rbd image 9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.172 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb 02 15:33:42 compute-0 sudo[248471]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:33:42 compute-0 sudo[248565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:33:42 compute-0 sudo[248565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:42 compute-0 sudo[248565]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:42 compute-0 sudo[248590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:33:42 compute-0 sudo[248590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: pgmap v993: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 1.0 MiB/s wr, 111 op/s
Feb 02 15:33:42 compute-0 ceph-mon[75334]: osdmap e194: 3 total, 3 up, 3 in
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:33:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:33:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632480985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.717 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.718 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.719 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.720 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.708161334 +0000 UTC m=+0.019314956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:33:42
Feb 02 15:33:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:33:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:33:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta']
Feb 02 15:33:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.828437051 +0000 UTC m=+0.139590693 container create bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.830 239549 DEBUG nova.virt.libvirt.vif [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-115644651',display_name='tempest-TestVolumeBackupRestore-server-115644651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-115644651',id=4,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIQfg2/0RYg2eAV/vIq+rNPUg1YrMnqEzqnfIAtnx4xt0moOfoMfDaMykJWQbiW8SwAGYiu1/oKW1u+OsHJhdYcu8tC7VYu1m20zqMc5Qmh+h06zOtNaiFOn6Oc8092GWQ==',key_name='tempest-TestVolumeBackupRestore-1043946044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e81756eb6c234f0ea96b5432c7bdfe28',ramdisk_id='',reservation_id='r-0lup900q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1787293559',owner_user_name='tempest-TestVolumeBackupRestore-1787293559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:33:39Z,user_data=None,user_id='07aa2f7c7016411b8d5fbeb3f4688083',uuid=9abd1d7f-3714-46ec-acde-e1d5f8158018,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.830 239549 DEBUG nova.network.os_vif_util [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converting VIF {"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.831 239549 DEBUG nova.network.os_vif_util [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.832 239549 DEBUG nova.objects.instance [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9abd1d7f-3714-46ec-acde-e1d5f8158018 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.845 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <uuid>9abd1d7f-3714-46ec-acde-e1d5f8158018</uuid>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <name>instance-00000004</name>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBackupRestore-server-115644651</nova:name>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:33:42</nova:creationTime>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:user uuid="07aa2f7c7016411b8d5fbeb3f4688083">tempest-TestVolumeBackupRestore-1787293559-project-member</nova:user>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:project uuid="e81756eb6c234f0ea96b5432c7bdfe28">tempest-TestVolumeBackupRestore-1787293559</nova:project>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <nova:port uuid="9ac0ec6e-22ac-4358-9076-c075cc2bffb4">
Feb 02 15:33:42 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <system>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="serial">9abd1d7f-3714-46ec-acde-e1d5f8158018</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="uuid">9abd1d7f-3714-46ec-acde-e1d5f8158018</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </system>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <os>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </os>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <features>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </features>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config">
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </source>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-827587f6-b1cc-4f62-a981-dde5bf81a403">
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </source>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:33:42 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <serial>827587f6-b1cc-4f62-a981-dde5bf81a403</serial>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:cf:a2:94"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <target dev="tap9ac0ec6e-22"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/console.log" append="off"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <video>
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </video>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:33:42 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:33:42 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:33:42 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:33:42 compute-0 nova_compute[239545]: </domain>
Feb 02 15:33:42 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.846 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Preparing to wait for external event network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.846 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.846 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.847 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.847 239549 DEBUG nova.virt.libvirt.vif [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-115644651',display_name='tempest-TestVolumeBackupRestore-server-115644651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-115644651',id=4,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIQfg2/0RYg2eAV/vIq+rNPUg1YrMnqEzqnfIAtnx4xt0moOfoMfDaMykJWQbiW8SwAGYiu1/oKW1u+OsHJhdYcu8tC7VYu1m20zqMc5Qmh+h06zOtNaiFOn6Oc8092GWQ==',key_name='tempest-TestVolumeBackupRestore-1043946044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e81756eb6c234f0ea96b5432c7bdfe28',ramdisk_id='',reservation_id='r-0lup900q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1787293559',owner_user_name='tempest-TestVolumeBackupRestore-1787293559-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:33:39Z,user_data=None,user_id='07aa2f7c7016411b8d5fbeb3f4688083',uuid=9abd1d7f-3714-46ec-acde-e1d5f8158018,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.848 239549 DEBUG nova.network.os_vif_util [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converting VIF {"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.848 239549 DEBUG nova.network.os_vif_util [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.849 239549 DEBUG os_vif [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.849 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.850 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.850 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.857 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.857 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ac0ec6e-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.858 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ac0ec6e-22, col_values=(('external_ids', {'iface-id': '9ac0ec6e-22ac-4358-9076-c075cc2bffb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:a2:94', 'vm-uuid': '9abd1d7f-3714-46ec-acde-e1d5f8158018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.860 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:42 compute-0 NetworkManager[49171]: <info>  [1770046422.8615] manager: (tap9ac0ec6e-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.862 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.868 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.869 239549 INFO os_vif [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22')
Feb 02 15:33:42 compute-0 systemd[1]: Started libpod-conmon-bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a.scope.
Feb 02 15:33:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.913555214 +0000 UTC m=+0.224708866 container init bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.92334428 +0000 UTC m=+0.234497892 container start bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.925 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.926 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.926 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] No VIF found with MAC fa:16:3e:cf:a2:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.926 239549 INFO nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Using config drive
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.929034665 +0000 UTC m=+0.240188307 container attach bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:33:42 compute-0 angry_hamilton[248647]: 167 167
Feb 02 15:33:42 compute-0 systemd[1]: libpod-bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a.scope: Deactivated successfully.
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.930962987 +0000 UTC m=+0.242116589 container died bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:33:42 compute-0 nova_compute[239545]: 2026-02-02 15:33:42.953 239549 DEBUG nova.storage.rbd_utils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] rbd image 9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-67fa3436ab692efe4fe222278647d102b9552d33fbdf1fe0d789fe714b45be67-merged.mount: Deactivated successfully.
Feb 02 15:33:42 compute-0 podman[248627]: 2026-02-02 15:33:42.973656487 +0000 UTC m=+0.284810089 container remove bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hamilton, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:33:42 compute-0 systemd[1]: libpod-conmon-bd38d134fa032df0a6c5f73ebc3330f17e393c3d0c618c3fb6fc27f5b8dd0a0a.scope: Deactivated successfully.
Feb 02 15:33:43 compute-0 podman[248688]: 2026-02-02 15:33:43.08791068 +0000 UTC m=+0.033507531 container create 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:33:43 compute-0 systemd[1]: Started libpod-conmon-960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4.scope.
Feb 02 15:33:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:43 compute-0 podman[248688]: 2026-02-02 15:33:43.162361203 +0000 UTC m=+0.107958064 container init 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:33:43 compute-0 podman[248688]: 2026-02-02 15:33:43.071945377 +0000 UTC m=+0.017542248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:43 compute-0 podman[248688]: 2026-02-02 15:33:43.172037096 +0000 UTC m=+0.117633937 container start 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:33:43 compute-0 podman[248688]: 2026-02-02 15:33:43.175383467 +0000 UTC m=+0.120980338 container attach 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.249 239549 INFO nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Creating config drive at /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.253 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzo7ycrbv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.285 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.377 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzo7ycrbv" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.396 239549 DEBUG nova.storage.rbd_utils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] rbd image 9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.400 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config 9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:33:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 589 KiB/s wr, 81 op/s
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.509 239549 DEBUG oslo_concurrency.processutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config 9abd1d7f-3714-46ec-acde-e1d5f8158018_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.511 239549 INFO nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Deleting local config drive /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018/disk.config because it was imported into RBD.
Feb 02 15:33:43 compute-0 sweet_williamson[248704]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:33:43 compute-0 sweet_williamson[248704]: --> All data devices are unavailable
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.5465] manager: (tap9ac0ec6e-22): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Feb 02 15:33:43 compute-0 kernel: tap9ac0ec6e-22: entered promiscuous mode
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.548 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 ovn_controller[144995]: 2026-02-02T15:33:43Z|00056|binding|INFO|Claiming lport 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 for this chassis.
Feb 02 15:33:43 compute-0 ovn_controller[144995]: 2026-02-02T15:33:43Z|00057|binding|INFO|9ac0ec6e-22ac-4358-9076-c075cc2bffb4: Claiming fa:16:3e:cf:a2:94 10.100.0.4
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.552 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.564 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:a2:94 10.100.0.4'], port_security=['fa:16:3e:cf:a2:94 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9abd1d7f-3714-46ec-acde-e1d5f8158018', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ab2541-df17-414d-a404-c3329b6705f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e81756eb6c234f0ea96b5432c7bdfe28', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17f370c7-abab-49e9-8590-db490ae34a40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e55b7f01-d1f9-480d-a5f3-a48c5995032f, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=9ac0ec6e-22ac-4358-9076-c075cc2bffb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.565 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 in datapath 36ab2541-df17-414d-a404-c3329b6705f0 bound to our chassis
Feb 02 15:33:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.567 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36ab2541-df17-414d-a404-c3329b6705f0
Feb 02 15:33:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3632480985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:33:43 compute-0 systemd-udevd[248776]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:33:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.576 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a7949764-4ec9-4f4e-a57e-f53c41c1cb5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.577 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36ab2541-d1 in ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.579 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36ab2541-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.579 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[55232039-d85d-4b99-b948-b017dfae82e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.580 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[dc70efc8-8a3a-4c1b-a78c-62e4001d133c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.5829] device (tap9ac0ec6e-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.5837] device (tap9ac0ec6e-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:33:43 compute-0 systemd-machined[207609]: New machine qemu-4-instance-00000004.
Feb 02 15:33:43 compute-0 systemd[1]: libpod-960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4.scope: Deactivated successfully.
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.593 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.590 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[ca65640b-8aef-4c3c-91f4-dd4a2ceb8433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_controller[144995]: 2026-02-02T15:33:43Z|00058|binding|INFO|Setting lport 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 ovn-installed in OVS
Feb 02 15:33:43 compute-0 ovn_controller[144995]: 2026-02-02T15:33:43Z|00059|binding|INFO|Setting lport 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 up in Southbound
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.598 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.614 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b73139-2479-44df-a409-64c3001e815d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 podman[248781]: 2026-02-02 15:33:43.629036342 +0000 UTC m=+0.029364389 container died 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.640 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[f5318df5-2f5d-48f4-ac44-71a01d34eb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.6501] manager: (tap36ab2541-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Feb 02 15:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-33247a9654bfed319912e18b774387dfd8b7921c353bb3b3ed84976beb25f0b0-merged.mount: Deactivated successfully.
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.649 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[348da2f9-2801-4587-accf-97d0e69d07f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 systemd-udevd[248779]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:33:43 compute-0 podman[248781]: 2026-02-02 15:33:43.673103099 +0000 UTC m=+0.073431106 container remove 960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williamson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.673 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[facb3ab7-8724-40b1-9b5f-467a445c6d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.677 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[df269cbd-4bea-4390-87b4-3f65133cc8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 systemd[1]: libpod-conmon-960ed96523df9b52baab53d16632d4efde4667b23cf59ff3c4c8ad23763a55a4.scope: Deactivated successfully.
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.6987] device (tap36ab2541-d0): carrier: link connected
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.703 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[9db0fa7d-87a4-4c5a-a157-0901bf7c0f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 sudo[248590]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.717 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5d1527-83a6-4c29-aa40-669f0e647c39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ab2541-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:b1:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390961, 'reachable_time': 18931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248823, 'error': None, 'target': 'ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.729 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[03775937-9fea-499f-979c-e9c48856e293]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:b1f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 390961, 'tstamp': 390961}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248825, 'error': None, 'target': 'ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.735 239549 DEBUG nova.network.neutron [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated VIF entry in instance network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.736 239549 DEBUG nova.network.neutron [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.745 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6159ec6b-2564-4e31-af3e-98fece7b793d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36ab2541-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:b1:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390961, 'reachable_time': 18931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248833, 'error': None, 'target': 'ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 sudo[248824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.771 239549 DEBUG oslo_concurrency.lockutils [req-0836ae94-c6f5-4d7b-b3f3-e000be86876c req-7c25ed27-c089-46dc-8889-3cba485e2c7d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:43 compute-0 sudo[248824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:43 compute-0 sudo[248824]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.779 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c11085-667f-4281-b5cc-64d2e795ac1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 sudo[248853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:33:43 compute-0 sudo[248853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.818 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[292412e7-d237-4321-bf91-cf1fb09cf402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.820 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ab2541-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.821 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.821 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36ab2541-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:43 compute-0 kernel: tap36ab2541-d0: entered promiscuous mode
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.822 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 NetworkManager[49171]: <info>  [1770046423.8236] manager: (tap36ab2541-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.828 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36ab2541-d0, col_values=(('external_ids', {'iface-id': '63a4799a-a69f-44ce-9ec5-9069a13f2870'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.829 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 ovn_controller[144995]: 2026-02-02T15:33:43Z|00060|binding|INFO|Releasing lport 63a4799a-a69f-44ce-9ec5-9069a13f2870 from this chassis (sb_readonly=0)
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.833 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36ab2541-df17-414d-a404-c3329b6705f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36ab2541-df17-414d-a404-c3329b6705f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.833 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdf3e0d-bd13-4a4e-affe-be02ea6ef15f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.834 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-36ab2541-df17-414d-a404-c3329b6705f0
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/36ab2541-df17-414d-a404-c3329b6705f0.pid.haproxy
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 36ab2541-df17-414d-a404-c3329b6705f0
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:33:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:43.834 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0', 'env', 'PROCESS_TAG=haproxy-36ab2541-df17-414d-a404-c3329b6705f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36ab2541-df17-414d-a404-c3329b6705f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.839 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.913 239549 DEBUG nova.compute.manager [req-cda23b5d-819f-4b50-980c-dc35959171d5 req-fe3e8a9c-2d40-4255-88ca-420347f77882 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.913 239549 DEBUG oslo_concurrency.lockutils [req-cda23b5d-819f-4b50-980c-dc35959171d5 req-fe3e8a9c-2d40-4255-88ca-420347f77882 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.913 239549 DEBUG oslo_concurrency.lockutils [req-cda23b5d-819f-4b50-980c-dc35959171d5 req-fe3e8a9c-2d40-4255-88ca-420347f77882 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.914 239549 DEBUG oslo_concurrency.lockutils [req-cda23b5d-819f-4b50-980c-dc35959171d5 req-fe3e8a9c-2d40-4255-88ca-420347f77882 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:43 compute-0 nova_compute[239545]: 2026-02-02 15:33:43.914 239549 DEBUG nova.compute.manager [req-cda23b5d-819f-4b50-980c-dc35959171d5 req-fe3e8a9c-2d40-4255-88ca-420347f77882 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Processing event network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.07029165 +0000 UTC m=+0.037137110 container create 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Feb 02 15:33:44 compute-0 systemd[1]: Started libpod-conmon-438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd.scope.
Feb 02 15:33:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.050916334 +0000 UTC m=+0.017761814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.184234705 +0000 UTC m=+0.151080155 container init 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.189395476 +0000 UTC m=+0.156240936 container start 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:33:44 compute-0 dreamy_snyder[248934]: 167 167
Feb 02 15:33:44 compute-0 systemd[1]: libpod-438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd.scope: Deactivated successfully.
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.214804586 +0000 UTC m=+0.181650076 container attach 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.215436454 +0000 UTC m=+0.182281954 container died 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:33:44 compute-0 podman[248935]: 2026-02-02 15:33:44.14797263 +0000 UTC m=+0.022834201 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:33:44 compute-0 podman[248935]: 2026-02-02 15:33:44.292386874 +0000 UTC m=+0.167248415 container create 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 02 15:33:44 compute-0 systemd[1]: Started libpod-conmon-0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8.scope.
Feb 02 15:33:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4968ad82f9ed99305d821244d59a07d0591e09b492f782541f3806e6eaf1a50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74c25536bf97bdde4ce75afc806921709ed57c956b705107ac97072bdc62051-merged.mount: Deactivated successfully.
Feb 02 15:33:44 compute-0 podman[248935]: 2026-02-02 15:33:44.439911252 +0000 UTC m=+0.314772783 container init 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:33:44 compute-0 podman[248935]: 2026-02-02 15:33:44.444957409 +0000 UTC m=+0.319818930 container start 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:33:44 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [NOTICE]   (249011) : New worker (249014) forked
Feb 02 15:33:44 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [NOTICE]   (249011) : Loading success.
Feb 02 15:33:44 compute-0 podman[248897]: 2026-02-02 15:33:44.471366326 +0000 UTC m=+0.438211786 container remove 438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:33:44 compute-0 systemd[1]: libpod-conmon-438f79803a67f06bc9a78c8dacf6be4c8c29e9cfb1fcca5322eb4c1593f4a4cd.scope: Deactivated successfully.
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.492 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046424.491949, 9abd1d7f-3714-46ec-acde-e1d5f8158018 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.492 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] VM Started (Lifecycle Event)
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.494 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.497 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.500 239549 INFO nova.virt.libvirt.driver [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Instance spawned successfully.
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.500 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.523 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.528 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.528 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.528 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.529 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.529 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.530 239549 DEBUG nova.virt.libvirt.driver [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.535 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.564 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.564 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046424.4922194, 9abd1d7f-3714-46ec-acde-e1d5f8158018 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.564 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] VM Paused (Lifecycle Event)
Feb 02 15:33:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb 02 15:33:44 compute-0 ceph-mon[75334]: pgmap v995: 305 pgs: 305 active+clean; 180 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 589 KiB/s wr, 81 op/s
Feb 02 15:33:44 compute-0 ceph-mon[75334]: osdmap e195: 3 total, 3 up, 3 in
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.588 239549 INFO nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Took 2.85 seconds to spawn the instance on the hypervisor.
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.588 239549 DEBUG nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.595 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.597 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046424.4966648, 9abd1d7f-3714-46ec-acde-e1d5f8158018 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.597 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] VM Resumed (Lifecycle Event)
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.616450888 +0000 UTC m=+0.059695503 container create cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:33:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb 02 15:33:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.652 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:33:44 compute-0 systemd[1]: Started libpod-conmon-cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b.scope.
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.655 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:33:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.577031238 +0000 UTC m=+0.020275873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74fb0788391038604bceff9598b13395d2c10a7d600b752b539a178c8406358/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.678 239549 INFO nova.compute.manager [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Took 6.01 seconds to build instance.
Feb 02 15:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74fb0788391038604bceff9598b13395d2c10a7d600b752b539a178c8406358/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74fb0788391038604bceff9598b13395d2c10a7d600b752b539a178c8406358/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74fb0788391038604bceff9598b13395d2c10a7d600b752b539a178c8406358/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.700157323 +0000 UTC m=+0.143401958 container init cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:33:44 compute-0 nova_compute[239545]: 2026-02-02 15:33:44.704 239549 DEBUG oslo_concurrency.lockutils [None req-fbdcf163-a8a8-4ed0-b5d9-13944ee74eb7 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.708374486 +0000 UTC m=+0.151619101 container start cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.726044746 +0000 UTC m=+0.169289391 container attach cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:33:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:33:44 compute-0 cool_northcutt[249047]: {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     "0": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "devices": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "/dev/loop3"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             ],
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_name": "ceph_lv0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_size": "21470642176",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "name": "ceph_lv0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "tags": {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.crush_device_class": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.encrypted": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_id": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.vdo": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.with_tpm": "0"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             },
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "vg_name": "ceph_vg0"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         }
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     ],
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     "1": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "devices": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "/dev/loop4"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             ],
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_name": "ceph_lv1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_size": "21470642176",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "name": "ceph_lv1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "tags": {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.crush_device_class": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.encrypted": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_id": "1",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.vdo": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.with_tpm": "0"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             },
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "vg_name": "ceph_vg1"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         }
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     ],
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     "2": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "devices": [
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "/dev/loop5"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             ],
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_name": "ceph_lv2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_size": "21470642176",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "name": "ceph_lv2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "tags": {
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.crush_device_class": "",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.encrypted": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osd_id": "2",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.vdo": "0",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:                 "ceph.with_tpm": "0"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             },
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "type": "block",
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:             "vg_name": "ceph_vg2"
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:         }
Feb 02 15:33:44 compute-0 cool_northcutt[249047]:     ]
Feb 02 15:33:44 compute-0 cool_northcutt[249047]: }
Feb 02 15:33:44 compute-0 systemd[1]: libpod-cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b.scope: Deactivated successfully.
Feb 02 15:33:44 compute-0 podman[249031]: 2026-02-02 15:33:44.970782445 +0000 UTC m=+0.414027100 container died cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:33:45 compute-0 podman[249031]: 2026-02-02 15:33:45.023409755 +0000 UTC m=+0.466654370 container remove cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_northcutt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:33:45 compute-0 systemd[1]: libpod-conmon-cc239356a86a7da35c72611609e1801af0fe5d4694f9ea722dec5f9f9060e16b.scope: Deactivated successfully.
Feb 02 15:33:45 compute-0 sudo[248853]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a74fb0788391038604bceff9598b13395d2c10a7d600b752b539a178c8406358-merged.mount: Deactivated successfully.
Feb 02 15:33:45 compute-0 sudo[249068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:33:45 compute-0 sudo[249068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:45 compute-0 sudo[249068]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:45 compute-0 sudo[249093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:33:45 compute-0 sudo[249093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243561025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243561025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.435592933 +0000 UTC m=+0.038915849 container create 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:33:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 34 KiB/s wr, 138 op/s
Feb 02 15:33:45 compute-0 systemd[1]: Started libpod-conmon-44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58.scope.
Feb 02 15:33:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.503616881 +0000 UTC m=+0.106939797 container init 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.508884534 +0000 UTC m=+0.112207440 container start 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:33:45 compute-0 bold_galois[249146]: 167 167
Feb 02 15:33:45 compute-0 systemd[1]: libpod-44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58.scope: Deactivated successfully.
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.416290948 +0000 UTC m=+0.019613894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.523496691 +0000 UTC m=+0.126819637 container attach 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.525216178 +0000 UTC m=+0.128539094 container died 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0906665d701d98c94d4d52fc8dc7ef12375c9087861810b1e2aae6725c4fa7d-merged.mount: Deactivated successfully.
Feb 02 15:33:45 compute-0 podman[249130]: 2026-02-02 15:33:45.55768234 +0000 UTC m=+0.161005266 container remove 44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:33:45 compute-0 systemd[1]: libpod-conmon-44a108071c038d279b1102565dcdbff439a542d45783b479c09d4bf566d9bd58.scope: Deactivated successfully.
Feb 02 15:33:45 compute-0 ceph-mon[75334]: osdmap e196: 3 total, 3 up, 3 in
Feb 02 15:33:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/243561025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/243561025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:45 compute-0 ceph-mon[75334]: pgmap v998: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 34 KiB/s wr, 138 op/s
Feb 02 15:33:45 compute-0 podman[249172]: 2026-02-02 15:33:45.680163638 +0000 UTC m=+0.036011060 container create d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:33:45 compute-0 systemd[1]: Started libpod-conmon-d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3.scope.
Feb 02 15:33:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad797843611824e2f63e3cc57151783aff5e2401a4b5760743747336845865/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad797843611824e2f63e3cc57151783aff5e2401a4b5760743747336845865/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad797843611824e2f63e3cc57151783aff5e2401a4b5760743747336845865/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ad797843611824e2f63e3cc57151783aff5e2401a4b5760743747336845865/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:33:45 compute-0 podman[249172]: 2026-02-02 15:33:45.660954916 +0000 UTC m=+0.016802318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:33:45 compute-0 podman[249172]: 2026-02-02 15:33:45.776097924 +0000 UTC m=+0.131945316 container init d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:33:45 compute-0 podman[249172]: 2026-02-02 15:33:45.782015314 +0000 UTC m=+0.137862736 container start d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:33:45 compute-0 podman[249172]: 2026-02-02 15:33:45.794855924 +0000 UTC m=+0.150703326 container attach d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.004 239549 DEBUG nova.compute.manager [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.005 239549 DEBUG oslo_concurrency.lockutils [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.005 239549 DEBUG oslo_concurrency.lockutils [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.006 239549 DEBUG oslo_concurrency.lockutils [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.006 239549 DEBUG nova.compute.manager [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] No waiting events found dispatching network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:33:46 compute-0 nova_compute[239545]: 2026-02-02 15:33:46.006 239549 WARNING nova.compute.manager [req-a04dbb54-c2f5-4e71-b92f-9a7f0e4d5794 req-8445278f-3cb3-4ee9-ac40-59e93a752231 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received unexpected event network-vif-plugged-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 for instance with vm_state active and task_state None.
Feb 02 15:33:46 compute-0 lvm[249264]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:33:46 compute-0 lvm[249264]: VG ceph_vg0 finished
Feb 02 15:33:46 compute-0 lvm[249266]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:33:46 compute-0 lvm[249266]: VG ceph_vg1 finished
Feb 02 15:33:46 compute-0 lvm[249267]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:33:46 compute-0 lvm[249267]: VG ceph_vg2 finished
Feb 02 15:33:46 compute-0 charming_hopper[249189]: {}
Feb 02 15:33:46 compute-0 systemd[1]: libpod-d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3.scope: Deactivated successfully.
Feb 02 15:33:46 compute-0 systemd[1]: libpod-d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3.scope: Consumed 1.006s CPU time.
Feb 02 15:33:46 compute-0 podman[249172]: 2026-02-02 15:33:46.548241492 +0000 UTC m=+0.904088874 container died d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 15:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6ad797843611824e2f63e3cc57151783aff5e2401a4b5760743747336845865-merged.mount: Deactivated successfully.
Feb 02 15:33:46 compute-0 podman[249172]: 2026-02-02 15:33:46.727658666 +0000 UTC m=+1.083506088 container remove d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:33:46 compute-0 systemd[1]: libpod-conmon-d02ff734b11b412e7fc959347d0d34ac6028bbb61965ac32e46d4d9921130ef3.scope: Deactivated successfully.
Feb 02 15:33:46 compute-0 sudo[249093]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:33:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:33:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:46 compute-0 sudo[249283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:33:46 compute-0 sudo[249283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:33:46 compute-0 sudo[249283]: pam_unix(sudo:session): session closed for user root
Feb 02 15:33:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 29 KiB/s wr, 182 op/s
Feb 02 15:33:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:33:47 compute-0 ceph-mon[75334]: pgmap v999: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 29 KiB/s wr, 182 op/s
Feb 02 15:33:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 02 15:33:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb 02 15:33:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb 02 15:33:47 compute-0 nova_compute[239545]: 2026-02-02 15:33:47.861 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:47 compute-0 nova_compute[239545]: 2026-02-02 15:33:47.996 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0087] manager: (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/39)
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0094] device (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <warn>  [1770046428.0096] device (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0103] manager: (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/40)
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0107] device (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <warn>  [1770046428.0107] device (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0115] manager: (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0122] manager: (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0126] device (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 15:33:48 compute-0 NetworkManager[49171]: <info>  [1770046428.0129] device (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 02 15:33:48 compute-0 ovn_controller[144995]: 2026-02-02T15:33:48Z|00061|binding|INFO|Releasing lport 63a4799a-a69f-44ce-9ec5-9069a13f2870 from this chassis (sb_readonly=0)
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.027 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.033 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.311 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.374 239549 DEBUG nova.compute.manager [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.374 239549 DEBUG nova.compute.manager [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing instance network info cache due to event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.374 239549 DEBUG oslo_concurrency.lockutils [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.374 239549 DEBUG oslo_concurrency.lockutils [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:48 compute-0 nova_compute[239545]: 2026-02-02 15:33:48.375 239549 DEBUG nova.network.neutron [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:33:48 compute-0 ceph-mon[75334]: osdmap e197: 3 total, 3 up, 3 in
Feb 02 15:33:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 30 KiB/s wr, 226 op/s
Feb 02 15:33:49 compute-0 ceph-mon[75334]: pgmap v1001: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 30 KiB/s wr, 226 op/s
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.089 239549 DEBUG nova.network.neutron [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated VIF entry in instance network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.090 239549 DEBUG nova.network.neutron [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.139 239549 DEBUG oslo_concurrency.lockutils [req-0ad625e9-44c0-4987-9758-3a2b491db7ea req-d58ba7fc-ae57-4f4f-81cc-3bb80d2ec31d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.457 239549 DEBUG nova.compute.manager [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.458 239549 DEBUG nova.compute.manager [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing instance network info cache due to event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.458 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.458 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:50 compute-0 nova_compute[239545]: 2026-02-02 15:33:50.459 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:33:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 02 15:33:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb 02 15:33:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb 02 15:33:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 30 KiB/s wr, 256 op/s
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.010 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated VIF entry in instance network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.011 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.030 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.030 239549 DEBUG nova.compute.manager [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.030 239549 DEBUG nova.compute.manager [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing instance network info cache due to event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.030 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.031 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.031 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:33:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048124293' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2048124293' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb 02 15:33:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb 02 15:33:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb 02 15:33:52 compute-0 ceph-mon[75334]: osdmap e198: 3 total, 3 up, 3 in
Feb 02 15:33:52 compute-0 ceph-mon[75334]: pgmap v1003: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 30 KiB/s wr, 256 op/s
Feb 02 15:33:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2048124293' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2048124293' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:52 compute-0 ceph-mon[75334]: osdmap e199: 3 total, 3 up, 3 in
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:33:52 compute-0 nova_compute[239545]: 2026-02-02 15:33:52.919 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:53 compute-0 nova_compute[239545]: 2026-02-02 15:33:53.298 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated VIF entry in instance network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:33:53 compute-0 nova_compute[239545]: 2026-02-02 15:33:53.299 239549 DEBUG nova.network.neutron [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:53 compute-0 nova_compute[239545]: 2026-02-02 15:33:53.313 239549 DEBUG oslo_concurrency.lockutils [req-cafb72ea-dfea-41c2-826f-9cb526ab5f98 req-274cbd6d-2ac2-44f0-befb-d4375ebdf758 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:53 compute-0 nova_compute[239545]: 2026-02-02 15:33:53.313 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.3 KiB/s wr, 190 op/s
Feb 02 15:33:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733577653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733577653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/241703842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/241703842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.9175963230708104e-06 of space, bias 1.0, pg target 0.0008752788969212432 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006949980603443068 of space, bias 1.0, pg target 0.20849941810329203 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034622828724592807 of space, bias 1.0, pg target 0.10386848617377842 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660564636480857 of space, bias 1.0, pg target 0.19981693909442572 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3974896397621378e-06 of space, bias 4.0, pg target 0.0016769875677145653 quantized to 16 (current 16)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:33:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:33:54 compute-0 ceph-mon[75334]: pgmap v1005: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.3 KiB/s wr, 190 op/s
Feb 02 15:33:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/733577653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/733577653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/241703842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/241703842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1912049372' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1912049372' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:55 compute-0 podman[249313]: 2026-02-02 15:33:55.336505507 +0000 UTC m=+0.069400171 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:33:55 compute-0 podman[249312]: 2026-02-02 15:33:55.36042928 +0000 UTC m=+0.093378545 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:33:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.1 KiB/s wr, 214 op/s
Feb 02 15:33:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1912049372' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1912049372' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:55 compute-0 nova_compute[239545]: 2026-02-02 15:33:55.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:56 compute-0 ovn_controller[144995]: 2026-02-02T15:33:56Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:a2:94 10.100.0.4
Feb 02 15:33:56 compute-0 ovn_controller[144995]: 2026-02-02T15:33:56Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:a2:94 10.100.0.4
Feb 02 15:33:56 compute-0 ceph-mon[75334]: pgmap v1006: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.1 KiB/s wr, 214 op/s
Feb 02 15:33:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:33:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3370133833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:33:56 compute-0 nova_compute[239545]: 2026-02-02 15:33:56.555 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3370133833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:56 compute-0 nova_compute[239545]: 2026-02-02 15:33:56.686 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:56.687 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:33:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:56.688 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:33:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:33:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 195 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 213 op/s
Feb 02 15:33:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3370133833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:33:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3370133833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.921 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.986 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.987 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.987 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:33:57 compute-0 nova_compute[239545]: 2026-02-02 15:33:57.987 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9abd1d7f-3714-46ec-acde-e1d5f8158018 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:33:58 compute-0 nova_compute[239545]: 2026-02-02 15:33:58.315 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:33:58 compute-0 ceph-mon[75334]: pgmap v1007: 305 pgs: 305 active+clean; 195 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 213 op/s
Feb 02 15:33:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:59.246 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:33:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:59.246 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:33:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:59.247 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:33:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 203 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 181 op/s
Feb 02 15:33:59 compute-0 nova_compute[239545]: 2026-02-02 15:33:59.656 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:33:59 compute-0 nova_compute[239545]: 2026-02-02 15:33:59.673 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:33:59 compute-0 nova_compute[239545]: 2026-02-02 15:33:59.673 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:33:59 compute-0 nova_compute[239545]: 2026-02-02 15:33:59.674 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:59 compute-0 nova_compute[239545]: 2026-02-02 15:33:59.674 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:33:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:33:59.690 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:00 compute-0 ceph-mon[75334]: pgmap v1008: 305 pgs: 305 active+clean; 203 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 181 op/s
Feb 02 15:34:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 502 KiB/s rd, 2.6 MiB/s wr, 210 op/s
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.577 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.577 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.577 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.577 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:34:01 compute-0 nova_compute[239545]: 2026-02-02 15:34:01.577 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:34:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118554714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.077 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb 02 15:34:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb 02 15:34:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.160 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.161 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.315 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.316 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.98810801375657GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.316 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.317 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:02 compute-0 ceph-mon[75334]: pgmap v1009: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 502 KiB/s rd, 2.6 MiB/s wr, 210 op/s
Feb 02 15:34:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/118554714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:02 compute-0 ceph-mon[75334]: osdmap e200: 3 total, 3 up, 3 in
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.557 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 9abd1d7f-3714-46ec-acde-e1d5f8158018 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.558 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.558 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.719 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:02 compute-0 nova_compute[239545]: 2026-02-02 15:34:02.925 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:34:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612512547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.210 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.215 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.236 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.263 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.264 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.317 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 2.6 MiB/s wr, 164 op/s
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/612512547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.565 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.566 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.566 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.567 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.567 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:34:03 compute-0 nova_compute[239545]: 2026-02-02 15:34:03.582 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:34:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535779534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2535779534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.168 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.235 239549 DEBUG nova.compute.manager [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.236 239549 DEBUG nova.compute.manager [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing instance network info cache due to event network-changed-9ac0ec6e-22ac-4358-9076-c075cc2bffb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.236 239549 DEBUG oslo_concurrency.lockutils [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.236 239549 DEBUG oslo_concurrency.lockutils [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.236 239549 DEBUG nova.network.neutron [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Refreshing network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.513 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.514 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.514 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.514 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.515 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.516 239549 INFO nova.compute.manager [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Terminating instance
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.517 239549 DEBUG nova.compute.manager [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:34:04 compute-0 kernel: tap9ac0ec6e-22 (unregistering): left promiscuous mode
Feb 02 15:34:04 compute-0 ceph-mon[75334]: pgmap v1011: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 465 KiB/s rd, 2.6 MiB/s wr, 164 op/s
Feb 02 15:34:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2535779534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2535779534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:04 compute-0 NetworkManager[49171]: <info>  [1770046444.5602] device (tap9ac0ec6e-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.561 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:04 compute-0 ovn_controller[144995]: 2026-02-02T15:34:04Z|00062|binding|INFO|Releasing lport 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 from this chassis (sb_readonly=0)
Feb 02 15:34:04 compute-0 ovn_controller[144995]: 2026-02-02T15:34:04Z|00063|binding|INFO|Setting lport 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 down in Southbound
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.569 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 ovn_controller[144995]: 2026-02-02T15:34:04Z|00064|binding|INFO|Removing iface tap9ac0ec6e-22 ovn-installed in OVS
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.573 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.583 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.584 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:a2:94 10.100.0.4'], port_security=['fa:16:3e:cf:a2:94 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9abd1d7f-3714-46ec-acde-e1d5f8158018', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ab2541-df17-414d-a404-c3329b6705f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e81756eb6c234f0ea96b5432c7bdfe28', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17f370c7-abab-49e9-8590-db490ae34a40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e55b7f01-d1f9-480d-a5f3-a48c5995032f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=9ac0ec6e-22ac-4358-9076-c075cc2bffb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.585 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4 in datapath 36ab2541-df17-414d-a404-c3329b6705f0 unbound from our chassis
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.586 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ab2541-df17-414d-a404-c3329b6705f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.587 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[000dea17-5fba-42ac-8661-fde737b1ac6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.588 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0 namespace which is not needed anymore
Feb 02 15:34:04 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 02 15:34:04 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 13.102s CPU time.
Feb 02 15:34:04 compute-0 systemd-machined[207609]: Machine qemu-4-instance-00000004 terminated.
Feb 02 15:34:04 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [NOTICE]   (249011) : haproxy version is 2.8.14-c23fe91
Feb 02 15:34:04 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [NOTICE]   (249011) : path to executable is /usr/sbin/haproxy
Feb 02 15:34:04 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [WARNING]  (249011) : Exiting Master process...
Feb 02 15:34:04 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [ALERT]    (249011) : Current worker (249014) exited with code 143 (Terminated)
Feb 02 15:34:04 compute-0 neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0[249002]: [WARNING]  (249011) : All workers exited. Exiting... (0)
Feb 02 15:34:04 compute-0 systemd[1]: libpod-0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8.scope: Deactivated successfully.
Feb 02 15:34:04 compute-0 podman[249427]: 2026-02-02 15:34:04.694381194 +0000 UTC m=+0.038020333 container died 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8-userdata-shm.mount: Deactivated successfully.
Feb 02 15:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4968ad82f9ed99305d821244d59a07d0591e09b492f782541f3806e6eaf1a50-merged.mount: Deactivated successfully.
Feb 02 15:34:04 compute-0 podman[249427]: 2026-02-02 15:34:04.734821977 +0000 UTC m=+0.078461116 container cleanup 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:34:04 compute-0 systemd[1]: libpod-conmon-0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8.scope: Deactivated successfully.
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.745 239549 INFO nova.virt.libvirt.driver [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Instance destroyed successfully.
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.746 239549 DEBUG nova.objects.instance [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lazy-loading 'resources' on Instance uuid 9abd1d7f-3714-46ec-acde-e1d5f8158018 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.761 239549 DEBUG nova.virt.libvirt.vif [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:33:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-115644651',display_name='tempest-TestVolumeBackupRestore-server-115644651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-115644651',id=4,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIQfg2/0RYg2eAV/vIq+rNPUg1YrMnqEzqnfIAtnx4xt0moOfoMfDaMykJWQbiW8SwAGYiu1/oKW1u+OsHJhdYcu8tC7VYu1m20zqMc5Qmh+h06zOtNaiFOn6Oc8092GWQ==',key_name='tempest-TestVolumeBackupRestore-1043946044',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:33:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e81756eb6c234f0ea96b5432c7bdfe28',ramdisk_id='',reservation_id='r-0lup900q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1787293559',owner_user_name='tempest-TestVolumeBackupRestore-1787293559-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:33:44Z,user_data=None,user_id='07aa2f7c7016411b8d5fbeb3f4688083',uuid=9abd1d7f-3714-46ec-acde-e1d5f8158018,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.762 239549 DEBUG nova.network.os_vif_util [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converting VIF {"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.762 239549 DEBUG nova.network.os_vif_util [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.763 239549 DEBUG os_vif [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.764 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.765 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ac0ec6e-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.769 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.772 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.774 239549 INFO os_vif [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:a2:94,bridge_name='br-int',has_traffic_filtering=True,id=9ac0ec6e-22ac-4358-9076-c075cc2bffb4,network=Network(36ab2541-df17-414d-a404-c3329b6705f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ac0ec6e-22')
Feb 02 15:34:04 compute-0 podman[249469]: 2026-02-02 15:34:04.785011721 +0000 UTC m=+0.035450711 container remove 0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.788 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8acdea71-6982-4996-aa14-a31e652d7955]: (4, ('Mon Feb  2 03:34:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0 (0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8)\n0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8\nMon Feb  2 03:34:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0 (0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8)\n0a5f33cac47daf5d9eab27ee5e410622271129d5678a2845ababb0d5251668c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.789 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcb78ca-91e4-4b22-842c-3e3c13485ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.790 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36ab2541-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:04 compute-0 kernel: tap36ab2541-d0: left promiscuous mode
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.792 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.798 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.801 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d13c02-4a8a-42c2-9d66-820aaffd4582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.816 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[208855a6-85bf-4022-a594-302f9e0341ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.817 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3cafbc9d-af58-444e-bead-9347d499e63a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.831 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d74e9cc8-f784-4382-892b-51d8529c1d02]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390955, 'reachable_time': 25724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249503, 'error': None, 'target': 'ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d36ab2541\x2ddf17\x2d414d\x2da404\x2dc3329b6705f0.mount: Deactivated successfully.
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.835 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36ab2541-df17-414d-a404-c3329b6705f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:34:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:04.835 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[a2456bd6-f3be-4cbd-8435-2d706938598b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.903 239549 INFO nova.virt.libvirt.driver [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Deleting instance files /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018_del
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.904 239549 INFO nova.virt.libvirt.driver [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Deletion of /var/lib/nova/instances/9abd1d7f-3714-46ec-acde-e1d5f8158018_del complete
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.958 239549 INFO nova.compute.manager [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Took 0.44 seconds to destroy the instance on the hypervisor.
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.959 239549 DEBUG oslo.service.loopingcall [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.959 239549 DEBUG nova.compute.manager [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:34:04 compute-0 nova_compute[239545]: 2026-02-02 15:34:04.959 239549 DEBUG nova.network.neutron [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:34:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 440 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Feb 02 15:34:05 compute-0 nova_compute[239545]: 2026-02-02 15:34:05.869 239549 DEBUG nova.network.neutron [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updated VIF entry in instance network info cache for port 9ac0ec6e-22ac-4358-9076-c075cc2bffb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:34:05 compute-0 nova_compute[239545]: 2026-02-02 15:34:05.870 239549 DEBUG nova.network.neutron [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [{"id": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "address": "fa:16:3e:cf:a2:94", "network": {"id": "36ab2541-df17-414d-a404-c3329b6705f0", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1910195856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e81756eb6c234f0ea96b5432c7bdfe28", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ac0ec6e-22", "ovs_interfaceid": "9ac0ec6e-22ac-4358-9076-c075cc2bffb4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:05 compute-0 nova_compute[239545]: 2026-02-02 15:34:05.892 239549 DEBUG oslo_concurrency.lockutils [req-5839aee3-070a-4fb2-ae77-d0dae7826aff req-1bb9f6cd-76d2-4beb-8b67-3b1d6062f332 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9abd1d7f-3714-46ec-acde-e1d5f8158018" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:05 compute-0 nova_compute[239545]: 2026-02-02 15:34:05.911 239549 DEBUG nova.network.neutron [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:05 compute-0 nova_compute[239545]: 2026-02-02 15:34:05.933 239549 INFO nova.compute.manager [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Took 0.97 seconds to deallocate network for instance.
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.022 239549 DEBUG nova.compute.manager [req-1bd0d7f6-6967-4211-845c-dded51bbcc4d req-a005b2b5-76c7-4dfc-9c30-41f201df3821 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Received event network-vif-deleted-9ac0ec6e-22ac-4358-9076-c075cc2bffb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.168 239549 INFO nova.compute.manager [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Took 0.23 seconds to detach 1 volumes for instance.
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.229 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.230 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.286 239549 DEBUG oslo_concurrency.processutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:06 compute-0 ceph-mon[75334]: pgmap v1012: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 440 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Feb 02 15:34:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:34:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1416890844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.828 239549 DEBUG oslo_concurrency.processutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.835 239549 DEBUG nova.compute.provider_tree [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.857 239549 DEBUG nova.scheduler.client.report [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.878 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.903 239549 INFO nova.scheduler.client.report [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Deleted allocations for instance 9abd1d7f-3714-46ec-acde-e1d5f8158018
Feb 02 15:34:06 compute-0 nova_compute[239545]: 2026-02-02 15:34:06.960 239549 DEBUG oslo_concurrency.lockutils [None req-32810e57-633e-4c55-8e07-4e1e9daa92b8 07aa2f7c7016411b8d5fbeb3f4688083 e81756eb6c234f0ea96b5432c7bdfe28 - - default default] Lock "9abd1d7f-3714-46ec-acde-e1d5f8158018" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2167936616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2167936616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.4 MiB/s wr, 112 op/s
Feb 02 15:34:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1416890844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2167936616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2167936616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:07 compute-0 ceph-mon[75334]: pgmap v1013: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.4 MiB/s wr, 112 op/s
Feb 02 15:34:08 compute-0 nova_compute[239545]: 2026-02-02 15:34:08.320 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:08 compute-0 nova_compute[239545]: 2026-02-02 15:34:08.364 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 02 15:34:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb 02 15:34:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb 02 15:34:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 21 KiB/s wr, 43 op/s
Feb 02 15:34:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514872404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514872404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:09 compute-0 nova_compute[239545]: 2026-02-02 15:34:09.771 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb 02 15:34:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb 02 15:34:10 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb 02 15:34:10 compute-0 ceph-mon[75334]: osdmap e201: 3 total, 3 up, 3 in
Feb 02 15:34:10 compute-0 ceph-mon[75334]: pgmap v1015: 305 pgs: 305 active+clean; 223 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 21 KiB/s wr, 43 op/s
Feb 02 15:34:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1514872404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1514872404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413837375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413837375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb 02 15:34:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb 02 15:34:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb 02 15:34:11 compute-0 ceph-mon[75334]: osdmap e202: 3 total, 3 up, 3 in
Feb 02 15:34:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/413837375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/413837375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 105 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.7 KiB/s wr, 144 op/s
Feb 02 15:34:12 compute-0 ceph-mon[75334]: osdmap e203: 3 total, 3 up, 3 in
Feb 02 15:34:12 compute-0 ceph-mon[75334]: pgmap v1018: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 105 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.7 KiB/s wr, 144 op/s
Feb 02 15:34:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297835966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2297835966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:13 compute-0 nova_compute[239545]: 2026-02-02 15:34:13.321 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 59 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 7.3 KiB/s wr, 142 op/s
Feb 02 15:34:14 compute-0 ceph-mon[75334]: pgmap v1019: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 59 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 7.3 KiB/s wr, 142 op/s
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:14 compute-0 nova_compute[239545]: 2026-02-02 15:34:14.774 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:14 compute-0 nova_compute[239545]: 2026-02-02 15:34:14.873 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:14 compute-0 nova_compute[239545]: 2026-02-02 15:34:14.960 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 129 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 14 MiB/s wr, 296 op/s
Feb 02 15:34:16 compute-0 ceph-mon[75334]: pgmap v1020: 305 pgs: 305 active+clean; 129 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 14 MiB/s wr, 296 op/s
Feb 02 15:34:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 02 15:34:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb 02 15:34:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb 02 15:34:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 257 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 29 MiB/s wr, 263 op/s
Feb 02 15:34:18 compute-0 ceph-mon[75334]: osdmap e204: 3 total, 3 up, 3 in
Feb 02 15:34:18 compute-0 ceph-mon[75334]: pgmap v1022: 305 pgs: 305 active+clean; 257 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 29 MiB/s wr, 263 op/s
Feb 02 15:34:18 compute-0 nova_compute[239545]: 2026-02-02 15:34:18.324 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 441 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 48 MiB/s wr, 153 op/s
Feb 02 15:34:19 compute-0 nova_compute[239545]: 2026-02-02 15:34:19.748 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046444.7438538, 9abd1d7f-3714-46ec-acde-e1d5f8158018 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:19 compute-0 nova_compute[239545]: 2026-02-02 15:34:19.748 239549 INFO nova.compute.manager [-] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] VM Stopped (Lifecycle Event)
Feb 02 15:34:19 compute-0 nova_compute[239545]: 2026-02-02 15:34:19.800 239549 DEBUG nova.compute.manager [None req-aabbec70-1c38-418e-aa86-b23bbbcdc2ee - - - - - -] [instance: 9abd1d7f-3714-46ec-acde-e1d5f8158018] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:19 compute-0 nova_compute[239545]: 2026-02-02 15:34:19.802 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:20 compute-0 ceph-mon[75334]: pgmap v1023: 305 pgs: 305 active+clean; 441 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 48 MiB/s wr, 153 op/s
Feb 02 15:34:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 969 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 93 MiB/s wr, 283 op/s
Feb 02 15:34:21 compute-0 ceph-mon[75334]: pgmap v1024: 305 pgs: 305 active+clean; 969 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 93 MiB/s wr, 283 op/s
Feb 02 15:34:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 02 15:34:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb 02 15:34:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb 02 15:34:23 compute-0 nova_compute[239545]: 2026-02-02 15:34:23.325 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 785 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 117 MiB/s wr, 224 op/s
Feb 02 15:34:24 compute-0 ceph-mon[75334]: osdmap e205: 3 total, 3 up, 3 in
Feb 02 15:34:24 compute-0 ceph-mon[75334]: pgmap v1026: 305 pgs: 305 active+clean; 785 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 117 MiB/s wr, 224 op/s
Feb 02 15:34:24 compute-0 nova_compute[239545]: 2026-02-02 15:34:24.804 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 445 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 98 MiB/s wr, 274 op/s
Feb 02 15:34:26 compute-0 podman[249528]: 2026-02-02 15:34:26.329372422 +0000 UTC m=+0.070863528 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:34:26 compute-0 podman[249529]: 2026-02-02 15:34:26.329372572 +0000 UTC m=+0.071434422 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 02 15:34:26 compute-0 ceph-mon[75334]: pgmap v1027: 305 pgs: 305 active+clean; 445 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 98 MiB/s wr, 274 op/s
Feb 02 15:34:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 81 MiB/s wr, 239 op/s
Feb 02 15:34:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1380132012' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2274225367' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2274225367' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.327 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 02 15:34:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb 02 15:34:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.560 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.560 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:28 compute-0 ceph-mon[75334]: pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 81 MiB/s wr, 239 op/s
Feb 02 15:34:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1380132012' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2274225367' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2274225367' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.638 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.790 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.791 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.798 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.798 239549 INFO nova.compute.claims [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:34:28 compute-0 nova_compute[239545]: 2026-02-02 15:34:28.989 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 12 MiB/s wr, 111 op/s
Feb 02 15:34:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:34:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993946798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.519 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.524 239549 DEBUG nova.compute.provider_tree [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.555 239549 DEBUG nova.scheduler.client.report [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:34:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 02 15:34:29 compute-0 ceph-mon[75334]: osdmap e206: 3 total, 3 up, 3 in
Feb 02 15:34:29 compute-0 ceph-mon[75334]: pgmap v1030: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 12 MiB/s wr, 111 op/s
Feb 02 15:34:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/993946798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.685 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.686 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:34:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb 02 15:34:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.766211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469766239, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1419, "num_deletes": 257, "total_data_size": 1946892, "memory_usage": 1983904, "flush_reason": "Manual Compaction"}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.772 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.772 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.806 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469821833, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1923666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20099, "largest_seqno": 21517, "table_properties": {"data_size": 1916670, "index_size": 4071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15254, "raw_average_key_size": 20, "raw_value_size": 1902454, "raw_average_value_size": 2598, "num_data_blocks": 181, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046372, "oldest_key_time": 1770046372, "file_creation_time": 1770046469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 55665 microseconds, and 3163 cpu microseconds.
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.881 239549 INFO nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.821871) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1923666 bytes OK
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.821890) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.909995) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.910039) EVENT_LOG_v1 {"time_micros": 1770046469910030, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.910063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1940362, prev total WAL file size 1940362, number of live WAL files 2.
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.910562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1878KB)], [47(7321KB)]
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469910597, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9420463, "oldest_snapshot_seqno": -1}
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.955 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4639 keys, 7662858 bytes, temperature: kUnknown
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469961871, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7662858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7629630, "index_size": 20510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115217, "raw_average_key_size": 24, "raw_value_size": 7543605, "raw_average_value_size": 1626, "num_data_blocks": 848, "num_entries": 4639, "num_filter_entries": 4639, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:34:29 compute-0 nova_compute[239545]: 2026-02-02 15:34:29.961 239549 DEBUG nova.policy [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b10e73971e784c20a0843cf9caf5cbbe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.962320) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7662858 bytes
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.965110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.0 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.9) write-amplify(4.0) OK, records in: 5165, records dropped: 526 output_compression: NoCompression
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.965158) EVENT_LOG_v1 {"time_micros": 1770046469965139, "job": 24, "event": "compaction_finished", "compaction_time_micros": 51478, "compaction_time_cpu_micros": 16079, "output_level": 6, "num_output_files": 1, "total_output_size": 7662858, "num_input_records": 5165, "num_output_records": 4639, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469965951, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046469967531, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.910501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.967657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.967665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.967666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.967668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:29 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:34:29.967669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.167 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.168 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.168 239549 INFO nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Creating image(s)
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.186 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.208 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.230 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.234 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.300 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.301 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.301 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.301 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.327 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:30 compute-0 nova_compute[239545]: 2026-02-02 15:34:30.331 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb df13eb08-f03e-43d5-a950-22b892d819af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:31 compute-0 ceph-mon[75334]: osdmap e207: 3 total, 3 up, 3 in
Feb 02 15:34:31 compute-0 nova_compute[239545]: 2026-02-02 15:34:31.430 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb df13eb08-f03e-43d5-a950-22b892d819af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.2 KiB/s wr, 115 op/s
Feb 02 15:34:31 compute-0 nova_compute[239545]: 2026-02-02 15:34:31.486 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] resizing rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:34:31 compute-0 nova_compute[239545]: 2026-02-02 15:34:31.687 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Successfully created port: fc0f8b6c-d0b6-4a4a-b130-67e31e204221 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:34:31 compute-0 nova_compute[239545]: 2026-02-02 15:34:31.916 239549 DEBUG nova.objects.instance [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'migration_context' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:32 compute-0 nova_compute[239545]: 2026-02-02 15:34:32.015 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:34:32 compute-0 nova_compute[239545]: 2026-02-02 15:34:32.015 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Ensure instance console log exists: /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:34:32 compute-0 nova_compute[239545]: 2026-02-02 15:34:32.016 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:32 compute-0 nova_compute[239545]: 2026-02-02 15:34:32.016 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:32 compute-0 nova_compute[239545]: 2026-02-02 15:34:32.016 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:32 compute-0 ceph-mon[75334]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.2 KiB/s wr, 115 op/s
Feb 02 15:34:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb 02 15:34:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb 02 15:34:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb 02 15:34:33 compute-0 nova_compute[239545]: 2026-02-02 15:34:33.329 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 47 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 704 KiB/s wr, 99 op/s
Feb 02 15:34:33 compute-0 ceph-mon[75334]: osdmap e208: 3 total, 3 up, 3 in
Feb 02 15:34:33 compute-0 nova_compute[239545]: 2026-02-02 15:34:33.910 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Successfully updated port: fc0f8b6c-d0b6-4a4a-b130-67e31e204221 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.039 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.040 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquired lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.040 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.254 239549 DEBUG nova.compute.manager [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-changed-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.255 239549 DEBUG nova.compute.manager [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Refreshing instance network info cache due to event network-changed-fc0f8b6c-d0b6-4a4a-b130-67e31e204221. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.255 239549 DEBUG oslo_concurrency.lockutils [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.269 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:34:34 compute-0 ceph-mon[75334]: pgmap v1034: 305 pgs: 305 active+clean; 47 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 704 KiB/s wr, 99 op/s
Feb 02 15:34:34 compute-0 nova_compute[239545]: 2026-02-02 15:34:34.808 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.156 239549 DEBUG nova.network.neutron [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating instance_info_cache with network_info: [{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.343 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Releasing lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.343 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Instance network_info: |[{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.344 239549 DEBUG oslo_concurrency.lockutils [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.344 239549 DEBUG nova.network.neutron [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Refreshing network info cache for port fc0f8b6c-d0b6-4a4a-b130-67e31e204221 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.347 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Start _get_guest_xml network_info=[{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.353 239549 WARNING nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.358 239549 DEBUG nova.virt.libvirt.host [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.360 239549 DEBUG nova.virt.libvirt.host [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.368 239549 DEBUG nova.virt.libvirt.host [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.369 239549 DEBUG nova.virt.libvirt.host [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.369 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.370 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.370 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.370 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.371 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.371 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.371 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.371 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.372 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.372 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.372 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.372 239549 DEBUG nova.virt.hardware [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.376 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 101 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.7 MiB/s wr, 112 op/s
Feb 02 15:34:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb 02 15:34:35 compute-0 ceph-mon[75334]: pgmap v1035: 305 pgs: 305 active+clean; 101 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.7 MiB/s wr, 112 op/s
Feb 02 15:34:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb 02 15:34:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb 02 15:34:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3718945832' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:35 compute-0 nova_compute[239545]: 2026-02-02 15:34:35.883 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.064 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.068 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384669657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.603 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.605 239549 DEBUG nova.virt.libvirt.vif [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-437458788',display_name='tempest-VolumesBackupsTest-instance-437458788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-437458788',id=5,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjsax5FDb5fGUvyB8ABhrpMBNJUBKCrgcZrFiak24zHXTLIsVDZR1IDlBWePQfsstMPHqrf+Jx6Fe86XxqHlRK4lexDzhIFxvdGEa2SuYmMNSCyALH2/fgufYxMXTs6/Q==',key_name='tempest-keypair-358872520',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-i8xwotfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:34:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=df13eb08-f03e-43d5-a950-22b892d819af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.605 239549 DEBUG nova.network.os_vif_util [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.606 239549 DEBUG nova.network.os_vif_util [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.607 239549 DEBUG nova.objects.instance [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'pci_devices' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.620 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <uuid>df13eb08-f03e-43d5-a950-22b892d819af</uuid>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <name>instance-00000005</name>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesBackupsTest-instance-437458788</nova:name>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:34:35</nova:creationTime>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:user uuid="b10e73971e784c20a0843cf9caf5cbbe">tempest-VolumesBackupsTest-1207235356-project-member</nova:user>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:project uuid="cd39cd97fc8041569e2a21b01b4ed0db">tempest-VolumesBackupsTest-1207235356</nova:project>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <nova:port uuid="fc0f8b6c-d0b6-4a4a-b130-67e31e204221">
Feb 02 15:34:36 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <system>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="serial">df13eb08-f03e-43d5-a950-22b892d819af</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="uuid">df13eb08-f03e-43d5-a950-22b892d819af</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </system>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <os>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </os>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <features>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </features>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/df13eb08-f03e-43d5-a950-22b892d819af_disk">
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </source>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/df13eb08-f03e-43d5-a950-22b892d819af_disk.config">
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </source>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:34:36 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:e6:02:ef"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <target dev="tapfc0f8b6c-d0"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/console.log" append="off"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <video>
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </video>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:34:36 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:34:36 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:34:36 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:34:36 compute-0 nova_compute[239545]: </domain>
Feb 02 15:34:36 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.621 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Preparing to wait for external event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.622 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.622 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.622 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.623 239549 DEBUG nova.virt.libvirt.vif [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-437458788',display_name='tempest-VolumesBackupsTest-instance-437458788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-437458788',id=5,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjsax5FDb5fGUvyB8ABhrpMBNJUBKCrgcZrFiak24zHXTLIsVDZR1IDlBWePQfsstMPHqrf+Jx6Fe86XxqHlRK4lexDzhIFxvdGEa2SuYmMNSCyALH2/fgufYxMXTs6/Q==',key_name='tempest-keypair-358872520',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-i8xwotfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:34:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=df13eb08-f03e-43d5-a950-22b892d819af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.623 239549 DEBUG nova.network.os_vif_util [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.624 239549 DEBUG nova.network.os_vif_util [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.624 239549 DEBUG os_vif [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.627 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.628 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.628 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.631 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.631 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc0f8b6c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.632 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc0f8b6c-d0, col_values=(('external_ids', {'iface-id': 'fc0f8b6c-d0b6-4a4a-b130-67e31e204221', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:02:ef', 'vm-uuid': 'df13eb08-f03e-43d5-a950-22b892d819af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.633 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:36 compute-0 NetworkManager[49171]: <info>  [1770046476.6342] manager: (tapfc0f8b6c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.636 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.638 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.639 239549 INFO os_vif [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0')
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.699 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.699 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.699 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No VIF found with MAC fa:16:3e:e6:02:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.700 239549 INFO nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Using config drive
Feb 02 15:34:36 compute-0 nova_compute[239545]: 2026-02-02 15:34:36.723 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:36 compute-0 ceph-mon[75334]: osdmap e209: 3 total, 3 up, 3 in
Feb 02 15:34:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3718945832' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1384669657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.082 239549 DEBUG nova.network.neutron [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updated VIF entry in instance network info cache for port fc0f8b6c-d0b6-4a4a-b130-67e31e204221. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.082 239549 DEBUG nova.network.neutron [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating instance_info_cache with network_info: [{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.098 239549 DEBUG oslo_concurrency.lockutils [req-5b2f471b-93d0-4bd0-a4b5-3b7b28892394 req-d6d960bf-8d38-403c-b146-0c15f6391481 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.184 239549 INFO nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Creating config drive at /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.188 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdjsldyxc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.305 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdjsldyxc" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.336 239549 DEBUG nova.storage.rbd_utils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image df13eb08-f03e-43d5-a950-22b892d819af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.339 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config df13eb08-f03e-43d5-a950-22b892d819af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.454 239549 DEBUG oslo_concurrency.processutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config df13eb08-f03e-43d5-a950-22b892d819af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.455 239549 INFO nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Deleting local config drive /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af/disk.config because it was imported into RBD.
Feb 02 15:34:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 130 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 5.0 MiB/s wr, 131 op/s
Feb 02 15:34:37 compute-0 kernel: tapfc0f8b6c-d0: entered promiscuous mode
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.4995] manager: (tapfc0f8b6c-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.500 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_controller[144995]: 2026-02-02T15:34:37Z|00065|binding|INFO|Claiming lport fc0f8b6c-d0b6-4a4a-b130-67e31e204221 for this chassis.
Feb 02 15:34:37 compute-0 ovn_controller[144995]: 2026-02-02T15:34:37Z|00066|binding|INFO|fc0f8b6c-d0b6-4a4a-b130-67e31e204221: Claiming fa:16:3e:e6:02:ef 10.100.0.6
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.504 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.507 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.517 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:02:ef 10.100.0.6'], port_security=['fa:16:3e:e6:02:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'df13eb08-f03e-43d5-a950-22b892d819af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1e8ad4-4b9e-4a25-af31-be35b7753118', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387ba1e2-c4db-437f-a706-eb9807770b03, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=fc0f8b6c-d0b6-4a4a-b130-67e31e204221) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.519 154982 INFO neutron.agent.ovn.metadata.agent [-] Port fc0f8b6c-d0b6-4a4a-b130-67e31e204221 in datapath 8a81d067-8083-4de2-8ac6-1682b4d8e6bb bound to our chassis
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.520 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:34:37 compute-0 systemd-udevd[249891]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:34:37 compute-0 systemd-machined[207609]: New machine qemu-5-instance-00000005.
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.528 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[384fe1bd-9bc3-4d4a-8b0d-bf0c3ffabbca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.529 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a81d067-81 in ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.531 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a81d067-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.531 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[23e380a6-8305-473a-9944-22158101807f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.532 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[db07ecad-e10c-4d21-a792-5eb2dd211476]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.5350] device (tapfc0f8b6c-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.5359] device (tapfc0f8b6c-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:34:37 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.541 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[5116634f-9d4c-4f81-899e-02336cc95225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.545 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_controller[144995]: 2026-02-02T15:34:37Z|00067|binding|INFO|Setting lport fc0f8b6c-d0b6-4a4a-b130-67e31e204221 ovn-installed in OVS
Feb 02 15:34:37 compute-0 ovn_controller[144995]: 2026-02-02T15:34:37Z|00068|binding|INFO|Setting lport fc0f8b6c-d0b6-4a4a-b130-67e31e204221 up in Southbound
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.548 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.553 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e1dce7-a93a-48a3-8810-86a335b29fcf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.571 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1f30e4-bf11-4b80-9d8d-c8fdbafaaf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.5767] manager: (tap8a81d067-80): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.576 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0027ff1-ac64-4932-8fb2-9152e3567726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.595 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[eba4ef91-015e-4931-8f34-4475bd686e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.598 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[998ff389-3ff0-4b09-8bd3-bf7aa9fab786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.6127] device (tap8a81d067-80): carrier: link connected
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.615 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[da768290-c14f-4a84-ab28-da4730ee42cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442851310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442851310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.627 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4887b1a6-2d92-4da5-b293-28ebf344279a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a81d067-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:2e:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396352, 'reachable_time': 28099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249924, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.637 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b46d215b-fe97-4a62-8ae6-6e18e405a511]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:2e9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396352, 'tstamp': 396352}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249925, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.649 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[73c29cc7-dff2-4b3d-b94e-8b61624a5332]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a81d067-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:2e:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396352, 'reachable_time': 28099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249926, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.668 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3745a7f-5c8b-4de7-8982-249f0de16cad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.705 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d0429387-cd33-4801-a134-5737319e1e53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.707 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a81d067-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.708 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.708 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a81d067-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.709 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 NetworkManager[49171]: <info>  [1770046477.7103] manager: (tap8a81d067-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb 02 15:34:37 compute-0 kernel: tap8a81d067-80: entered promiscuous mode
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.712 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.714 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a81d067-80, col_values=(('external_ids', {'iface-id': '0e2183d9-9021-4390-95a4-b6c8ee275a55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.715 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_controller[144995]: 2026-02-02T15:34:37Z|00069|binding|INFO|Releasing lport 0e2183d9-9021-4390-95a4-b6c8ee275a55 from this chassis (sb_readonly=0)
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.716 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.718 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.719 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[38ce65fb-1200-47ff-b046-39c6a22bcfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.720 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:34:37 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:37.720 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'env', 'PROCESS_TAG=haproxy-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.722 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:37 compute-0 ceph-mon[75334]: pgmap v1037: 305 pgs: 305 active+clean; 130 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 5.0 MiB/s wr, 131 op/s
Feb 02 15:34:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2442851310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2442851310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.783 239549 DEBUG nova.compute.manager [req-b49a73a3-9b40-4e6d-b198-fbac9dd44623 req-7d073b0f-0a18-4d9d-9237-ccd25da02690 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.784 239549 DEBUG oslo_concurrency.lockutils [req-b49a73a3-9b40-4e6d-b198-fbac9dd44623 req-7d073b0f-0a18-4d9d-9237-ccd25da02690 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.784 239549 DEBUG oslo_concurrency.lockutils [req-b49a73a3-9b40-4e6d-b198-fbac9dd44623 req-7d073b0f-0a18-4d9d-9237-ccd25da02690 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.784 239549 DEBUG oslo_concurrency.lockutils [req-b49a73a3-9b40-4e6d-b198-fbac9dd44623 req-7d073b0f-0a18-4d9d-9237-ccd25da02690 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:37 compute-0 nova_compute[239545]: 2026-02-02 15:34:37.784 239549 DEBUG nova.compute.manager [req-b49a73a3-9b40-4e6d-b198-fbac9dd44623 req-7d073b0f-0a18-4d9d-9237-ccd25da02690 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Processing event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:34:38 compute-0 podman[249958]: 2026-02-02 15:34:38.057789085 +0000 UTC m=+0.041756856 container create 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:34:38 compute-0 systemd[1]: Started libpod-conmon-8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5.scope.
Feb 02 15:34:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18fd7bdf9806b002d22020da5c1cde707558db025184a403f00560c3cd1c6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:38 compute-0 podman[249958]: 2026-02-02 15:34:38.12860644 +0000 UTC m=+0.112574241 container init 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:34:38 compute-0 podman[249958]: 2026-02-02 15:34:38.132349243 +0000 UTC m=+0.116317014 container start 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:34:38 compute-0 podman[249958]: 2026-02-02 15:34:38.037046021 +0000 UTC m=+0.021013812 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:34:38 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [NOTICE]   (249977) : New worker (249979) forked
Feb 02 15:34:38 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [NOTICE]   (249977) : Loading success.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.332 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.591 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046478.5909367, df13eb08-f03e-43d5-a950-22b892d819af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.592 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] VM Started (Lifecycle Event)
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.593 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.596 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.599 239549 INFO nova.virt.libvirt.driver [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Instance spawned successfully.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.600 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.614 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.619 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.623 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.623 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.624 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.624 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.625 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.625 239549 DEBUG nova.virt.libvirt.driver [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.643 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.644 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046478.5911503, df13eb08-f03e-43d5-a950-22b892d819af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.644 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] VM Paused (Lifecycle Event)
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.669 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.672 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046478.5960293, df13eb08-f03e-43d5-a950-22b892d819af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.672 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] VM Resumed (Lifecycle Event)
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.701 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.704 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.717 239549 INFO nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Took 8.55 seconds to spawn the instance on the hypervisor.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.718 239549 DEBUG nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.727 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.791 239549 INFO nova.compute.manager [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Took 10.05 seconds to build instance.
Feb 02 15:34:38 compute-0 nova_compute[239545]: 2026-02-02 15:34:38.807 239549 DEBUG oslo_concurrency.lockutils [None req-a2e822eb-7af0-4498-aa77-f4c5fff1bafd b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857530910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3857530910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3857530910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3857530910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 MiB/s wr, 115 op/s
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.885 239549 DEBUG nova.compute.manager [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.885 239549 DEBUG oslo_concurrency.lockutils [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.886 239549 DEBUG oslo_concurrency.lockutils [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.886 239549 DEBUG oslo_concurrency.lockutils [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.887 239549 DEBUG nova.compute.manager [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] No waiting events found dispatching network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:34:39 compute-0 nova_compute[239545]: 2026-02-02 15:34:39.887 239549 WARNING nova.compute.manager [req-2731545b-d0ce-496b-91b4-c355ceb8c058 req-0024ce32-7299-4784-a639-9dbc88a2adb2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received unexpected event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 for instance with vm_state active and task_state None.
Feb 02 15:34:40 compute-0 ceph-mon[75334]: pgmap v1038: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 MiB/s wr, 115 op/s
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.171 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.172 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.196 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:34:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591714427' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591714427' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.269 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.270 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.276 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.276 239549 INFO nova.compute.claims [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.401 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:34:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148744069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.946 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.951 239549 DEBUG nova.compute.provider_tree [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:34:40 compute-0 nova_compute[239545]: 2026-02-02 15:34:40.972 239549 DEBUG nova.scheduler.client.report [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.001 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.002 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.052 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.052 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:34:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1591714427' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1591714427' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2148744069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.079 239549 INFO nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.117 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.208 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.209 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.210 239549 INFO nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Creating image(s)
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.226 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.245 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.263 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.265 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.309 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.310 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.311 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.311 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.331 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.336 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.441 239549 DEBUG nova.policy [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2059424184a34c2da768a2a83c23a7f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '010150769bb34684be4a2dff720d1b35', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:34:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.511 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.561 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] resizing rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.626 239549 DEBUG nova.objects.instance [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.634 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.649 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.650 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Ensure instance console log exists: /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.651 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.651 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.652 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956006727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956006727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:41 compute-0 nova_compute[239545]: 2026-02-02 15:34:41.980 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:41 compute-0 NetworkManager[49171]: <info>  [1770046481.9816] manager: (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb 02 15:34:41 compute-0 NetworkManager[49171]: <info>  [1770046481.9825] manager: (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.011 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:42 compute-0 ovn_controller[144995]: 2026-02-02T15:34:42Z|00070|binding|INFO|Releasing lport 0e2183d9-9021-4390-95a4-b6c8ee275a55 from this chassis (sb_readonly=0)
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.018 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:42 compute-0 ceph-mon[75334]: pgmap v1039: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Feb 02 15:34:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2956006727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2956006727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb 02 15:34:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb 02 15:34:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.484 239549 DEBUG nova.compute.manager [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-changed-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.485 239549 DEBUG nova.compute.manager [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Refreshing instance network info cache due to event network-changed-fc0f8b6c-d0b6-4a4a-b130-67e31e204221. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.485 239549 DEBUG oslo_concurrency.lockutils [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.486 239549 DEBUG oslo_concurrency.lockutils [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.486 239549 DEBUG nova.network.neutron [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Refreshing network info cache for port fc0f8b6c-d0b6-4a4a-b130-67e31e204221 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:34:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1755787302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1755787302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:42 compute-0 nova_compute[239545]: 2026-02-02 15:34:42.780 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Successfully created port: ee164fa3-c608-44ca-a9ad-458e67ac2c7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:34:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:34:42
Feb 02 15:34:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:34:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:34:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr']
Feb 02 15:34:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:34:43 compute-0 ceph-mon[75334]: osdmap e210: 3 total, 3 up, 3 in
Feb 02 15:34:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1755787302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1755787302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.333 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.358 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Successfully updated port: ee164fa3-c608-44ca-a9ad-458e67ac2c7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.372 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.372 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquired lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.373 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:34:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 142 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.501 239549 DEBUG nova.network.neutron [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updated VIF entry in instance network info cache for port fc0f8b6c-d0b6-4a4a-b130-67e31e204221. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.502 239549 DEBUG nova.network.neutron [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating instance_info_cache with network_info: [{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.520 239549 DEBUG oslo_concurrency.lockutils [req-c0102dfa-099d-4dcc-a243-491477f97124 req-f3ba74d9-b05a-46b9-bd2a-8e832b8d74b7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:43 compute-0 nova_compute[239545]: 2026-02-02 15:34:43.652 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:34:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb 02 15:34:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb 02 15:34:44 compute-0 ceph-mon[75334]: pgmap v1041: 305 pgs: 305 active+clean; 142 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.315 239549 DEBUG nova.network.neutron [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updating instance_info_cache with network_info: [{"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.338 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Releasing lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.338 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Instance network_info: |[{"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.341 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Start _get_guest_xml network_info=[{"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.346 239549 WARNING nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.356 239549 DEBUG nova.virt.libvirt.host [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.357 239549 DEBUG nova.virt.libvirt.host [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.364 239549 DEBUG nova.virt.libvirt.host [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.364 239549 DEBUG nova.virt.libvirt.host [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.365 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.365 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.365 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.365 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.365 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.366 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.367 239549 DEBUG nova.virt.hardware [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.369 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.574 239549 DEBUG nova.compute.manager [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-changed-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.574 239549 DEBUG nova.compute.manager [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Refreshing instance network info cache due to event network-changed-ee164fa3-c608-44ca-a9ad-458e67ac2c7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.574 239549 DEBUG oslo_concurrency.lockutils [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.574 239549 DEBUG oslo_concurrency.lockutils [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.575 239549 DEBUG nova.network.neutron [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Refreshing network info cache for port ee164fa3-c608-44ca-a9ad-458e67ac2c7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:34:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39222931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39222931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:34:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.921 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184010606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.949 239549 WARNING nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.949 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Triggering sync for uuid df13eb08-f03e-43d5-a950-22b892d819af _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.949 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Triggering sync for uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.949 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.950 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.950 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.951 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.970 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:44 compute-0 nova_compute[239545]: 2026-02-02 15:34:44.974 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.057 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:45 compute-0 ceph-mon[75334]: osdmap e211: 3 total, 3 up, 3 in
Feb 02 15:34:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/39222931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/39222931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1184010606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 171 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 272 op/s
Feb 02 15:34:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614615240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.510 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.513 239549 DEBUG nova.virt.libvirt.vif [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:34:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1183795373',display_name='tempest-VolumesSnapshotTestJSON-instance-1183795373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1183795373',id=6,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLkQ4rSay35gXx1sFkXgaspAfMZ05j08g+NZnAfJgqfVDoVOOX3N+K2gSQe2SOV6USMmPUPKwx63dSLrOJFEAS7IXNmtZlVlE4Hhp+41AEUZLhOaiuapRiHo54watMy3g==',key_name='tempest-keypair-308051672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-cmhqxqvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:34:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=4b3386f6-82b3-4e67-abc7-d82021a8f04c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.514 239549 DEBUG nova.network.os_vif_util [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.515 239549 DEBUG nova.network.os_vif_util [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.516 239549 DEBUG nova.objects.instance [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.563 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <uuid>4b3386f6-82b3-4e67-abc7-d82021a8f04c</uuid>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <name>instance-00000006</name>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1183795373</nova:name>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:34:44</nova:creationTime>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:user uuid="2059424184a34c2da768a2a83c23a7f5">tempest-VolumesSnapshotTestJSON-1645199079-project-member</nova:user>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:project uuid="010150769bb34684be4a2dff720d1b35">tempest-VolumesSnapshotTestJSON-1645199079</nova:project>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <nova:port uuid="ee164fa3-c608-44ca-a9ad-458e67ac2c7a">
Feb 02 15:34:45 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <system>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="serial">4b3386f6-82b3-4e67-abc7-d82021a8f04c</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="uuid">4b3386f6-82b3-4e67-abc7-d82021a8f04c</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </system>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <os>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </os>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <features>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </features>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk">
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </source>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config">
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </source>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:34:45 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:d7:c0:e9"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <target dev="tapee164fa3-c6"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/console.log" append="off"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <video>
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </video>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:34:45 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:34:45 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:34:45 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:34:45 compute-0 nova_compute[239545]: </domain>
Feb 02 15:34:45 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.564 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Preparing to wait for external event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.565 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.565 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.565 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.566 239549 DEBUG nova.virt.libvirt.vif [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:34:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1183795373',display_name='tempest-VolumesSnapshotTestJSON-instance-1183795373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1183795373',id=6,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLkQ4rSay35gXx1sFkXgaspAfMZ05j08g+NZnAfJgqfVDoVOOX3N+K2gSQe2SOV6USMmPUPKwx63dSLrOJFEAS7IXNmtZlVlE4Hhp+41AEUZLhOaiuapRiHo54watMy3g==',key_name='tempest-keypair-308051672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-cmhqxqvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:34:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=4b3386f6-82b3-4e67-abc7-d82021a8f04c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.566 239549 DEBUG nova.network.os_vif_util [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.566 239549 DEBUG nova.network.os_vif_util [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.567 239549 DEBUG os_vif [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.567 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.568 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.568 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.570 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.570 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee164fa3-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.571 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee164fa3-c6, col_values=(('external_ids', {'iface-id': 'ee164fa3-c608-44ca-a9ad-458e67ac2c7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:c0:e9', 'vm-uuid': '4b3386f6-82b3-4e67-abc7-d82021a8f04c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:45 compute-0 NetworkManager[49171]: <info>  [1770046485.5730] manager: (tapee164fa3-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.574 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.577 239549 INFO os_vif [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6')
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.761 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.762 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.762 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No VIF found with MAC fa:16:3e:d7:c0:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.763 239549 INFO nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Using config drive
Feb 02 15:34:45 compute-0 nova_compute[239545]: 2026-02-02 15:34:45.783 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.096 239549 DEBUG nova.network.neutron [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updated VIF entry in instance network info cache for port ee164fa3-c608-44ca-a9ad-458e67ac2c7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.097 239549 DEBUG nova.network.neutron [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updating instance_info_cache with network_info: [{"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.260 239549 INFO nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Creating config drive at /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.265 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbui396qb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.384 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbui396qb" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:46 compute-0 ceph-mon[75334]: pgmap v1043: 305 pgs: 305 active+clean; 171 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 272 op/s
Feb 02 15:34:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3614615240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.429 239549 DEBUG nova.storage.rbd_utils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.432 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:46 compute-0 nova_compute[239545]: 2026-02-02 15:34:46.454 239549 DEBUG oslo_concurrency.lockutils [req-ba79885d-9f29-46f7-9e70-a337c94a1b81 req-9a600313-71e2-4e02-b352-8a134b247884 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:47 compute-0 sudo[250342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:34:47 compute-0 sudo[250342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:47 compute-0 sudo[250342]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:47 compute-0 sudo[250367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:47 compute-0 sudo[250367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 288 op/s
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.542 239549 DEBUG oslo_concurrency.processutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config 4b3386f6-82b3-4e67-abc7-d82021a8f04c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.542 239549 INFO nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Deleting local config drive /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c/disk.config because it was imported into RBD.
Feb 02 15:34:47 compute-0 kernel: tapee164fa3-c6: entered promiscuous mode
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.5930] manager: (tapee164fa3-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Feb 02 15:34:47 compute-0 ovn_controller[144995]: 2026-02-02T15:34:47Z|00071|binding|INFO|Claiming lport ee164fa3-c608-44ca-a9ad-458e67ac2c7a for this chassis.
Feb 02 15:34:47 compute-0 ovn_controller[144995]: 2026-02-02T15:34:47Z|00072|binding|INFO|ee164fa3-c608-44ca-a9ad-458e67ac2c7a: Claiming fa:16:3e:d7:c0:e9 10.100.0.3
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.599 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ovn_controller[144995]: 2026-02-02T15:34:47Z|00073|binding|INFO|Setting lport ee164fa3-c608-44ca-a9ad-458e67ac2c7a ovn-installed in OVS
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.610 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:c0:e9 10.100.0.3'], port_security=['fa:16:3e:d7:c0:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4b3386f6-82b3-4e67-abc7-d82021a8f04c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010150769bb34684be4a2dff720d1b35', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9d818ae-e9da-48a5-a5e2-ecd0a0e7ed92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71d31c89-df7b-4a1a-b202-a6dac026a894, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ee164fa3-c608-44ca-a9ad-458e67ac2c7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.611 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ee164fa3-c608-44ca-a9ad-458e67ac2c7a in datapath 476af4b4-172e-44ce-8fec-4b78aa7603bb bound to our chassis
Feb 02 15:34:47 compute-0 ovn_controller[144995]: 2026-02-02T15:34:47Z|00074|binding|INFO|Setting lport ee164fa3-c608-44ca-a9ad-458e67ac2c7a up in Southbound
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.607 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.612 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.613 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.619 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4fb097-d81d-4714-94ce-287d81ffbec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.620 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap476af4b4-11 in ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.622 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap476af4b4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.623 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f91094d-5afa-4872-bb76-76b5368def73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.623 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[affb1ee0-14c6-4465-8a32-0e85e49fc5ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 systemd-machined[207609]: New machine qemu-6-instance-00000006.
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.631 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8c54d9-5951-4f71-aab7-687cb83be3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.652 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[748cc6af-9663-486d-8e5d-88ab42dfc569]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 systemd-udevd[250428]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.6705] device (tapee164fa3-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.6710] device (tapee164fa3-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.677 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[23d380fe-2bad-4c71-94b7-5ae7be1b6293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.680 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f754fe19-1e6f-4e06-95ea-ae6c108630c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.6821] manager: (tap476af4b4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.698 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9b2c2-f62c-40f0-9c54-6cf30864d439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.701 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[86d73695-9df5-4dbe-b00a-8bec3f29fb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.7165] device (tap476af4b4-10): carrier: link connected
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.719 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5a58ca-9d2b-4a42-adcc-4d41ee0b91fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.729 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6281c9-ab44-4bd1-b54a-97364d8709dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap476af4b4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:15:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397363, 'reachable_time': 21689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250470, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 sudo[250367]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.740 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[979d9378-2c7d-4aac-b22e-c4b7ecdba7c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:15ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 397363, 'tstamp': 397363}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250472, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.751 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a86e9028-9df4-4cb6-8512-49d1ff9553a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap476af4b4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:15:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397363, 'reachable_time': 21689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250473, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.772 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca57344-3929-4fc0-86e4-87434b27d17a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.806 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6d2625-d820-4e26-a393-bcee421fdb9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 kernel: tap476af4b4-10: entered promiscuous mode
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.808 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap476af4b4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.809 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.809 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap476af4b4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:47 compute-0 NetworkManager[49171]: <info>  [1770046487.8129] manager: (tap476af4b4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.811 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.816 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap476af4b4-10, col_values=(('external_ids', {'iface-id': 'dc26ec84-c08a-465a-bcfe-8d9ce28f5877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:47 compute-0 ovn_controller[144995]: 2026-02-02T15:34:47Z|00075|binding|INFO|Releasing lport dc26ec84-c08a-465a-bcfe-8d9ce28f5877 from this chassis (sb_readonly=0)
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.817 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.818 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.819 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b47dd2d-7f64-4a06-8399-a7f391d6ea70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.820 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:34:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:47.820 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'env', 'PROCESS_TAG=haproxy-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/476af4b4-172e-44ce-8fec-4b78aa7603bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:34:47 compute-0 nova_compute[239545]: 2026-02-02 15:34:47.823 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:34:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:34:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:34:48 compute-0 sudo[250483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:34:48 compute-0 sudo[250483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:48 compute-0 sudo[250483]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:48 compute-0 sudo[250509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:34:48 compute-0 sudo[250509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.199 239549 DEBUG nova.compute.manager [req-a637151a-6003-46b8-a294-aa1dc21d36ae req-c7d4a051-4cf1-40c2-bf05-3ac5fa4a69c1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.200 239549 DEBUG oslo_concurrency.lockutils [req-a637151a-6003-46b8-a294-aa1dc21d36ae req-c7d4a051-4cf1-40c2-bf05-3ac5fa4a69c1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.200 239549 DEBUG oslo_concurrency.lockutils [req-a637151a-6003-46b8-a294-aa1dc21d36ae req-c7d4a051-4cf1-40c2-bf05-3ac5fa4a69c1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.201 239549 DEBUG oslo_concurrency.lockutils [req-a637151a-6003-46b8-a294-aa1dc21d36ae req-c7d4a051-4cf1-40c2-bf05-3ac5fa4a69c1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.201 239549 DEBUG nova.compute.manager [req-a637151a-6003-46b8-a294-aa1dc21d36ae req-c7d4a051-4cf1-40c2-bf05-3ac5fa4a69c1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Processing event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:34:48 compute-0 podman[250553]: 2026-02-02 15:34:48.12162643 +0000 UTC m=+0.030140718 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.335 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.383 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046488.3826632, 4b3386f6-82b3-4e67-abc7-d82021a8f04c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.383 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] VM Started (Lifecycle Event)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.384 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.394 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.398 239549 INFO nova.virt.libvirt.driver [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Instance spawned successfully.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.398 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.452 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.456 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:34:48 compute-0 podman[250553]: 2026-02-02 15:34:48.470735924 +0000 UTC m=+0.379250172 container create 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.523 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.524 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046488.382759, 4b3386f6-82b3-4e67-abc7-d82021a8f04c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.524 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] VM Paused (Lifecycle Event)
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.527669704 +0000 UTC m=+0.248083250 container create 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.545 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.546 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.547 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.547 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.547 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.548 239549 DEBUG nova.virt.libvirt.driver [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.551 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.555 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046488.3869033, 4b3386f6-82b3-4e67-abc7-d82021a8f04c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.555 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] VM Resumed (Lifecycle Event)
Feb 02 15:34:48 compute-0 systemd[1]: Started libpod-conmon-0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2.scope.
Feb 02 15:34:48 compute-0 systemd[1]: Started libpod-conmon-6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683.scope.
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.4883523 +0000 UTC m=+0.208765876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:48 compute-0 ceph-mon[75334]: pgmap v1044: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 288 op/s
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:34:48 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:34:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef82acc9ac2edabc02e8954cfdad3821f8fa707daa2a6baf1cf238845967cce6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.613 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.619 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:34:48 compute-0 podman[250553]: 2026-02-02 15:34:48.620016473 +0000 UTC m=+0.528530731 container init 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.62392184 +0000 UTC m=+0.344335416 container init 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:34:48 compute-0 podman[250553]: 2026-02-02 15:34:48.627277153 +0000 UTC m=+0.535791411 container start 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.632117003 +0000 UTC m=+0.352530549 container start 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.635 239549 INFO nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Took 7.43 seconds to spawn the instance on the hypervisor.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.635 239549 DEBUG nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.63643553 +0000 UTC m=+0.356849076 container attach 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:48 compute-0 cranky_fermat[250639]: 167 167
Feb 02 15:34:48 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [NOTICE]   (250645) : New worker (250648) forked
Feb 02 15:34:48 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [NOTICE]   (250645) : Loading success.
Feb 02 15:34:48 compute-0 systemd[1]: libpod-6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683.scope: Deactivated successfully.
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.648609172 +0000 UTC m=+0.369022748 container died 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.651 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcb0da0cec98c1153d6b98a39db5f573cb65698b41291a811a169713ea4aefc3-merged.mount: Deactivated successfully.
Feb 02 15:34:48 compute-0 podman[250615]: 2026-02-02 15:34:48.688997173 +0000 UTC m=+0.409410719 container remove 6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:34:48 compute-0 systemd[1]: libpod-conmon-6951992212a651102817f5d4eecb7d7235b9a9066c3a873dff356edea1329683.scope: Deactivated successfully.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.716 239549 INFO nova.compute.manager [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Took 8.47 seconds to build instance.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.775 239549 DEBUG oslo_concurrency.lockutils [None req-6988d6ec-7e61-4995-bb7b-0679e67e000f 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.776 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.776 239549 INFO nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:34:48 compute-0 nova_compute[239545]: 2026-02-02 15:34:48.776 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:48 compute-0 podman[250674]: 2026-02-02 15:34:48.894443665 +0000 UTC m=+0.109456094 container create 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:34:48 compute-0 podman[250674]: 2026-02-02 15:34:48.804029795 +0000 UTC m=+0.019042254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:48 compute-0 systemd[1]: Started libpod-conmon-00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912.scope.
Feb 02 15:34:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:48 compute-0 podman[250674]: 2026-02-02 15:34:48.965458675 +0000 UTC m=+0.180471164 container init 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:34:48 compute-0 podman[250674]: 2026-02-02 15:34:48.973430533 +0000 UTC m=+0.188442972 container start 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:34:48 compute-0 podman[250674]: 2026-02-02 15:34:48.977810302 +0000 UTC m=+0.192822761 container attach 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:34:49 compute-0 funny_mccarthy[250690]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:34:49 compute-0 funny_mccarthy[250690]: --> All data devices are unavailable
Feb 02 15:34:49 compute-0 systemd[1]: libpod-00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912.scope: Deactivated successfully.
Feb 02 15:34:49 compute-0 podman[250674]: 2026-02-02 15:34:49.408833185 +0000 UTC m=+0.623845614 container died 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 2.7 MiB/s wr, 131 op/s
Feb 02 15:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf816221b5ea7a4a452630dfb04b7f445b0044a083ad52e472489e1db609385a-merged.mount: Deactivated successfully.
Feb 02 15:34:50 compute-0 ceph-mon[75334]: pgmap v1045: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 2.7 MiB/s wr, 131 op/s
Feb 02 15:34:50 compute-0 podman[250674]: 2026-02-02 15:34:50.043515286 +0000 UTC m=+1.258527715 container remove 00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:34:50 compute-0 systemd[1]: libpod-conmon-00d614134bfd1767dd1b42da1674d9ee10a497fcc052eefd845830acd79aa912.scope: Deactivated successfully.
Feb 02 15:34:50 compute-0 sudo[250509]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:50 compute-0 sudo[250722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:34:50 compute-0 sudo[250722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:50 compute-0 sudo[250722]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:50 compute-0 sudo[250747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:34:50 compute-0 sudo[250747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.318 239549 DEBUG nova.compute.manager [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.319 239549 DEBUG oslo_concurrency.lockutils [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.320 239549 DEBUG oslo_concurrency.lockutils [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.320 239549 DEBUG oslo_concurrency.lockutils [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.320 239549 DEBUG nova.compute.manager [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] No waiting events found dispatching network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.320 239549 WARNING nova.compute.manager [req-ce2a43c7-c125-4d08-8d9f-5b205e50e0f8 req-bd0703ff-f9d8-41a8-a425-202488b8074d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received unexpected event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a for instance with vm_state active and task_state None.
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.462566893 +0000 UTC m=+0.046080843 container create 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:34:50 compute-0 systemd[1]: Started libpod-conmon-3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b.scope.
Feb 02 15:34:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.439559123 +0000 UTC m=+0.023073103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.542673768 +0000 UTC m=+0.126187728 container init 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.547675123 +0000 UTC m=+0.131189073 container start 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.550845751 +0000 UTC m=+0.134359701 container attach 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 02 15:34:50 compute-0 thirsty_margulis[250800]: 167 167
Feb 02 15:34:50 compute-0 systemd[1]: libpod-3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b.scope: Deactivated successfully.
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.553053626 +0000 UTC m=+0.136567576 container died 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e097cf483fb43c3b5864cb2cd0cb056f0b2cb6e111be0ce5e42c91dd0dfb4b-merged.mount: Deactivated successfully.
Feb 02 15:34:50 compute-0 nova_compute[239545]: 2026-02-02 15:34:50.574 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:50 compute-0 podman[250783]: 2026-02-02 15:34:50.582698651 +0000 UTC m=+0.166212601 container remove 3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:34:50 compute-0 systemd[1]: libpod-conmon-3cba5cf806dd4e6d49e6cc0518a4e45aee952551acbb3ad01f4443f6ec47055b.scope: Deactivated successfully.
Feb 02 15:34:50 compute-0 podman[250824]: 2026-02-02 15:34:50.713430791 +0000 UTC m=+0.043602852 container create d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:50 compute-0 systemd[1]: Started libpod-conmon-d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09.scope.
Feb 02 15:34:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe779ccc3b29405b81f780eb7a915c4bdba66ff9fe4e29e616d4f665cebe2cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:50 compute-0 podman[250824]: 2026-02-02 15:34:50.693404355 +0000 UTC m=+0.023576326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe779ccc3b29405b81f780eb7a915c4bdba66ff9fe4e29e616d4f665cebe2cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe779ccc3b29405b81f780eb7a915c4bdba66ff9fe4e29e616d4f665cebe2cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe779ccc3b29405b81f780eb7a915c4bdba66ff9fe4e29e616d4f665cebe2cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:50 compute-0 podman[250824]: 2026-02-02 15:34:50.807942574 +0000 UTC m=+0.138114555 container init d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:34:50 compute-0 podman[250824]: 2026-02-02 15:34:50.816354312 +0000 UTC m=+0.146526263 container start d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:34:50 compute-0 podman[250824]: 2026-02-02 15:34:50.819868139 +0000 UTC m=+0.150040110 container attach d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:34:51 compute-0 stupefied_carver[250841]: {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     "0": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "devices": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "/dev/loop3"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             ],
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_name": "ceph_lv0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_size": "21470642176",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "name": "ceph_lv0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "tags": {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_name": "ceph",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.crush_device_class": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.encrypted": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.objectstore": "bluestore",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_id": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.vdo": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.with_tpm": "0"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             },
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "vg_name": "ceph_vg0"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         }
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     ],
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     "1": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "devices": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "/dev/loop4"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             ],
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_name": "ceph_lv1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_size": "21470642176",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "name": "ceph_lv1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "tags": {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_name": "ceph",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.crush_device_class": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.encrypted": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.objectstore": "bluestore",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_id": "1",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.vdo": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.with_tpm": "0"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             },
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "vg_name": "ceph_vg1"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         }
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     ],
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     "2": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "devices": [
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "/dev/loop5"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             ],
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_name": "ceph_lv2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_size": "21470642176",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "name": "ceph_lv2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "tags": {
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.cluster_name": "ceph",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.crush_device_class": "",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.encrypted": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.objectstore": "bluestore",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osd_id": "2",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.vdo": "0",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:                 "ceph.with_tpm": "0"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             },
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "type": "block",
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:             "vg_name": "ceph_vg2"
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:         }
Feb 02 15:34:51 compute-0 stupefied_carver[250841]:     ]
Feb 02 15:34:51 compute-0 stupefied_carver[250841]: }
Feb 02 15:34:51 compute-0 systemd[1]: libpod-d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09.scope: Deactivated successfully.
Feb 02 15:34:51 compute-0 podman[250850]: 2026-02-02 15:34:51.172787956 +0000 UTC m=+0.029283557 container died d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe779ccc3b29405b81f780eb7a915c4bdba66ff9fe4e29e616d4f665cebe2cc9-merged.mount: Deactivated successfully.
Feb 02 15:34:51 compute-0 podman[250850]: 2026-02-02 15:34:51.211462225 +0000 UTC m=+0.067957816 container remove d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:34:51 compute-0 systemd[1]: libpod-conmon-d6a1f1693d39fd8a53974c8d79d697c08262fcec4df9d07a0359d45e26267f09.scope: Deactivated successfully.
Feb 02 15:34:51 compute-0 sudo[250747]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:51 compute-0 sudo[250865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:34:51 compute-0 sudo[250865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:51 compute-0 sudo[250865]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:51 compute-0 sudo[250890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:34:51 compute-0 sudo[250890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420194704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420194704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 193 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 231 op/s
Feb 02 15:34:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1420194704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1420194704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:51 compute-0 ovn_controller[144995]: 2026-02-02T15:34:51Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:02:ef 10.100.0.6
Feb 02 15:34:51 compute-0 ovn_controller[144995]: 2026-02-02T15:34:51Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:02:ef 10.100.0.6
Feb 02 15:34:51 compute-0 podman[250928]: 2026-02-02 15:34:51.684814048 +0000 UTC m=+0.052304737 container create 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 02 15:34:51 compute-0 systemd[1]: Started libpod-conmon-6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa.scope.
Feb 02 15:34:51 compute-0 podman[250928]: 2026-02-02 15:34:51.666963185 +0000 UTC m=+0.034453894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:51 compute-0 podman[250928]: 2026-02-02 15:34:51.768583124 +0000 UTC m=+0.136073843 container init 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:34:51 compute-0 podman[250928]: 2026-02-02 15:34:51.779982927 +0000 UTC m=+0.147473606 container start 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:34:51 compute-0 dazzling_turing[250945]: 167 167
Feb 02 15:34:51 compute-0 systemd[1]: libpod-6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa.scope: Deactivated successfully.
Feb 02 15:34:51 compute-0 conmon[250945]: conmon 6589bd32127cd9f9d119 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa.scope/container/memory.events
Feb 02 15:34:51 compute-0 podman[250928]: 2026-02-02 15:34:51.787971455 +0000 UTC m=+0.155462154 container attach 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:34:51 compute-0 podman[250950]: 2026-02-02 15:34:51.824379467 +0000 UTC m=+0.023968225 container died 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-17aa805a2cb1aad4334900901eb57ee55bba5e29c2c2d862e22c6a6bf1cbbf20-merged.mount: Deactivated successfully.
Feb 02 15:34:51 compute-0 podman[250950]: 2026-02-02 15:34:51.859206871 +0000 UTC m=+0.058795609 container remove 6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:34:51 compute-0 systemd[1]: libpod-conmon-6589bd32127cd9f9d1192b6c31143b6055e9b2628674cf17815bd2d225bec3fa.scope: Deactivated successfully.
Feb 02 15:34:51 compute-0 podman[250972]: 2026-02-02 15:34:51.996840252 +0000 UTC m=+0.045559381 container create 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:34:52 compute-0 systemd[1]: Started libpod-conmon-5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299.scope.
Feb 02 15:34:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270413ba790fccf2d9588cfe3002591022dc9faf51ab5c6544e54f4524979ab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270413ba790fccf2d9588cfe3002591022dc9faf51ab5c6544e54f4524979ab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270413ba790fccf2d9588cfe3002591022dc9faf51ab5c6544e54f4524979ab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270413ba790fccf2d9588cfe3002591022dc9faf51ab5c6544e54f4524979ab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:51.972247452 +0000 UTC m=+0.020966581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:52.079986582 +0000 UTC m=+0.128705711 container init 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:52.086605166 +0000 UTC m=+0.135324265 container start 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:52.093077847 +0000 UTC m=+0.141796976 container attach 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb 02 15:34:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb 02 15:34:52 compute-0 nova_compute[239545]: 2026-02-02 15:34:52.413 239549 DEBUG nova.compute.manager [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-changed-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:34:52 compute-0 nova_compute[239545]: 2026-02-02 15:34:52.414 239549 DEBUG nova.compute.manager [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Refreshing instance network info cache due to event network-changed-ee164fa3-c608-44ca-a9ad-458e67ac2c7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:34:52 compute-0 nova_compute[239545]: 2026-02-02 15:34:52.415 239549 DEBUG oslo_concurrency.lockutils [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:52 compute-0 nova_compute[239545]: 2026-02-02 15:34:52.415 239549 DEBUG oslo_concurrency.lockutils [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:52 compute-0 nova_compute[239545]: 2026-02-02 15:34:52.415 239549 DEBUG nova.network.neutron [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Refreshing network info cache for port ee164fa3-c608-44ca-a9ad-458e67ac2c7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:34:52 compute-0 ceph-mon[75334]: pgmap v1046: 305 pgs: 305 active+clean; 193 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 231 op/s
Feb 02 15:34:52 compute-0 ceph-mon[75334]: osdmap e212: 3 total, 3 up, 3 in
Feb 02 15:34:52 compute-0 lvm[251066]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:34:52 compute-0 lvm[251067]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:34:52 compute-0 lvm[251066]: VG ceph_vg0 finished
Feb 02 15:34:52 compute-0 lvm[251067]: VG ceph_vg1 finished
Feb 02 15:34:52 compute-0 lvm[251069]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:34:52 compute-0 lvm[251069]: VG ceph_vg2 finished
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878623043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878623043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:52 compute-0 mystifying_chandrasekhar[250987]: {}
Feb 02 15:34:52 compute-0 systemd[1]: libpod-5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299.scope: Deactivated successfully.
Feb 02 15:34:52 compute-0 systemd[1]: libpod-5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299.scope: Consumed 1.001s CPU time.
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:52.756431168 +0000 UTC m=+0.805150317 container died 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-270413ba790fccf2d9588cfe3002591022dc9faf51ab5c6544e54f4524979ab0-merged.mount: Deactivated successfully.
Feb 02 15:34:52 compute-0 podman[250972]: 2026-02-02 15:34:52.794836461 +0000 UTC m=+0.843555580 container remove 5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:34:52 compute-0 systemd[1]: libpod-conmon-5c526ef47e94257e39c0d2b8413e087cfd1bcd38dc2243cd10d8d33aad776299.scope: Deactivated successfully.
Feb 02 15:34:52 compute-0 sudo[250890]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:34:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:34:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:52 compute-0 sudo[251083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:34:52 compute-0 sudo[251083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:34:52 compute-0 sudo[251083]: pam_unix(sudo:session): session closed for user root
Feb 02 15:34:53 compute-0 nova_compute[239545]: 2026-02-02 15:34:53.333 239549 DEBUG nova.network.neutron [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updated VIF entry in instance network info cache for port ee164fa3-c608-44ca-a9ad-458e67ac2c7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:34:53 compute-0 nova_compute[239545]: 2026-02-02 15:34:53.334 239549 DEBUG nova.network.neutron [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updating instance_info_cache with network_info: [{"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:34:53 compute-0 nova_compute[239545]: 2026-02-02 15:34:53.337 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:53 compute-0 nova_compute[239545]: 2026-02-02 15:34:53.376 239549 DEBUG oslo_concurrency.lockutils [req-0180bd06-ed4d-4da1-a77f-cb7da8e4cc12 req-1d126a41-a659-47b9-a705-58c89b8ff6b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-4b3386f6-82b3-4e67-abc7-d82021a8f04c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:34:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 204 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 220 op/s
Feb 02 15:34:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/878623043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/878623043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:53 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:34:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274275111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274275111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001077730612044705 of space, bias 1.0, pg target 0.32331918361341144 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034967535073196564 of space, bias 1.0, pg target 0.10490260521958969 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.3866398935478833e-07 of space, bias 1.0, pg target 0.0001015991968064365 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659908391554204 of space, bias 1.0, pg target 0.1997972517466261 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2734753394519676e-06 of space, bias 4.0, pg target 0.0015281704073423611 quantized to 16 (current 16)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:34:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:34:54 compute-0 ceph-mon[75334]: pgmap v1048: 305 pgs: 305 active+clean; 204 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 220 op/s
Feb 02 15:34:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3274275111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3274275111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745578922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745578922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 212 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 211 op/s
Feb 02 15:34:55 compute-0 nova_compute[239545]: 2026-02-02 15:34:55.578 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3745578922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3745578922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:56 compute-0 nova_compute[239545]: 2026-02-02 15:34:56.573 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:56 compute-0 ceph-mon[75334]: pgmap v1049: 305 pgs: 305 active+clean; 212 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 211 op/s
Feb 02 15:34:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:34:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772666789' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:34:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772666789' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:34:57 compute-0 podman[251109]: 2026-02-02 15:34:57.330391615 +0000 UTC m=+0.075346841 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 15:34:57 compute-0 podman[251110]: 2026-02-02 15:34:57.335854508 +0000 UTC m=+0.078338824 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:34:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 229 op/s
Feb 02 15:34:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2772666789' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:34:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2772666789' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:34:57 compute-0 ceph-mon[75334]: pgmap v1050: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 229 op/s
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.338 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:58 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:58.380 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.380 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:34:58 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:58.381 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:34:58 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:58.382 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.399 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.399 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.419 239549 DEBUG nova.objects.instance [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.462 239549 INFO nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Ignoring supplied device name: /dev/vdb
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.483 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.690 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.692 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.693 239549 INFO nova.compute.manager [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Attaching volume fc4bbe92-1043-49fb-901f-a2e31aa75a71 to /dev/vdb
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.752 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.752 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.752 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.752 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.923 239549 DEBUG os_brick.utils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.924 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.938 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.939 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a739d473-fe5a-489c-9253-797b0f35f50e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.940 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.945 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.945 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0debd5-e63c-4cf7-b9c0-f830aed7b578]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.946 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.952 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.953 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ff15f38e-74b6-481f-9f4e-6b5205f2febe]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.954 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2c6eec-6f15-49ea-90f3-32c6bfa74680]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.955 239549 DEBUG oslo_concurrency.processutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.978 239549 DEBUG oslo_concurrency.processutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.983 239549 DEBUG os_brick.initiator.connectors.lightos [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.983 239549 DEBUG os_brick.initiator.connectors.lightos [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.983 239549 DEBUG os_brick.initiator.connectors.lightos [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.984 239549 DEBUG os_brick.utils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:34:58 compute-0 nova_compute[239545]: 2026-02-02 15:34:58.985 239549 DEBUG nova.virt.block_device [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating existing volume attachment record: 4f1ef9ff-dff8-4a7a-9f64-0ccff5789e65 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:34:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:59.247 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:34:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:59.247 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:34:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:34:59.248 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:34:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 235 op/s
Feb 02 15:34:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:34:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2831506866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:34:59 compute-0 nova_compute[239545]: 2026-02-02 15:34:59.913 239549 DEBUG nova.objects.instance [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:34:59 compute-0 nova_compute[239545]: 2026-02-02 15:34:59.939 239549 DEBUG nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Attempting to attach volume fc4bbe92-1043-49fb-901f-a2e31aa75a71 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:34:59 compute-0 nova_compute[239545]: 2026-02-02 15:34:59.943 239549 DEBUG nova.virt.libvirt.guest [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:34:59 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-fc4bbe92-1043-49fb-901f-a2e31aa75a71">
Feb 02 15:34:59 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   </source>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:34:59 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:34:59 compute-0 nova_compute[239545]:   <serial>fc4bbe92-1043-49fb-901f-a2e31aa75a71</serial>
Feb 02 15:34:59 compute-0 nova_compute[239545]: </disk>
Feb 02 15:34:59 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.141 239549 DEBUG nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.142 239549 DEBUG nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.142 239549 DEBUG nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.142 239549 DEBUG nova.virt.libvirt.driver [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No VIF found with MAC fa:16:3e:e6:02:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.185 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating instance_info_cache with network_info: [{"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.246 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-df13eb08-f03e-43d5-a950-22b892d819af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.246 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.249 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.524 239549 DEBUG oslo_concurrency.lockutils [None req-a726ef5d-bcc0-4c5a-b05f-e6e8fbe1517d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:00 compute-0 nova_compute[239545]: 2026-02-02 15:35:00.582 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:00 compute-0 ceph-mon[75334]: pgmap v1051: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 235 op/s
Feb 02 15:35:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2831506866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 221 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 581 KiB/s rd, 1.9 MiB/s wr, 171 op/s
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:01 compute-0 ceph-mon[75334]: pgmap v1052: 305 pgs: 305 active+clean; 221 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 581 KiB/s rd, 1.9 MiB/s wr, 171 op/s
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.793 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.794 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.794 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.794 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:35:01 compute-0 nova_compute[239545]: 2026-02-02 15:35:01.794 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:02 compute-0 ovn_controller[144995]: 2026-02-02T15:35:02Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:c0:e9 10.100.0.3
Feb 02 15:35:02 compute-0 ovn_controller[144995]: 2026-02-02T15:35:02Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:c0:e9 10.100.0.3
Feb 02 15:35:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:35:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033981457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.356 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.425 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.425 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.425 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.428 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.428 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.581 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.582 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4240MB free_disk=59.91055710054934GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.583 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.583 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.652 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance df13eb08-f03e-43d5-a950-22b892d819af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.653 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.653 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.653 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:35:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1390778570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:02 compute-0 nova_compute[239545]: 2026-02-02 15:35:02.728 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4033981457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1390778570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:35:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731987350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.246 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.251 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.270 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.290 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.291 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:03 compute-0 nova_compute[239545]: 2026-02-02 15:35:03.341 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 227 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 2.3 MiB/s wr, 163 op/s
Feb 02 15:35:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb 02 15:35:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb 02 15:35:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3731987350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:03 compute-0 ceph-mon[75334]: pgmap v1053: 305 pgs: 305 active+clean; 227 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 2.3 MiB/s wr, 163 op/s
Feb 02 15:35:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb 02 15:35:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658279835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658279835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:04 compute-0 nova_compute[239545]: 2026-02-02 15:35:04.286 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:04 compute-0 nova_compute[239545]: 2026-02-02 15:35:04.287 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:04 compute-0 nova_compute[239545]: 2026-02-02 15:35:04.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:04 compute-0 nova_compute[239545]: 2026-02-02 15:35:04.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:35:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb 02 15:35:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb 02 15:35:04 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb 02 15:35:04 compute-0 ceph-mon[75334]: osdmap e213: 3 total, 3 up, 3 in
Feb 02 15:35:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/658279835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/658279835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190175339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190175339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 244 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 3.2 MiB/s wr, 172 op/s
Feb 02 15:35:05 compute-0 nova_compute[239545]: 2026-02-02 15:35:05.584 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:05 compute-0 ceph-mon[75334]: osdmap e214: 3 total, 3 up, 3 in
Feb 02 15:35:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4190175339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4190175339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:05 compute-0 ceph-mon[75334]: pgmap v1056: 305 pgs: 305 active+clean; 244 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 3.2 MiB/s wr, 172 op/s
Feb 02 15:35:06 compute-0 nova_compute[239545]: 2026-02-02 15:35:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913883836' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913883836' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb 02 15:35:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1913883836' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1913883836' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb 02 15:35:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.311 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.311 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.330 239549 DEBUG nova.objects.instance [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.373 239549 INFO nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Ignoring supplied device name: /dev/vdb
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.416 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 2.8 MiB/s wr, 171 op/s
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.782 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.782 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.783 239549 INFO nova.compute.manager [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Attaching volume 505add96-37b3-4fa0-b8e7-4ce2dc3c22cf to /dev/vdb
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.900 239549 DEBUG os_brick.utils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.901 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.907 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.908 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae2df6d-b2af-424c-b371-b9f3120e74f3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.908 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.913 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.913 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[794238ec-4a33-411a-8434-05718d8bcc41]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.914 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.923 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.923 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[346e255a-c18a-4e70-8700-d8ae66e68163]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.924 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[fb166ef0-0995-4b23-a014-1c6ac68de789]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.925 239549 DEBUG oslo_concurrency.processutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.943 239549 DEBUG oslo_concurrency.processutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.945 239549 DEBUG os_brick.initiator.connectors.lightos [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.945 239549 DEBUG os_brick.initiator.connectors.lightos [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.946 239549 DEBUG os_brick.initiator.connectors.lightos [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.946 239549 DEBUG os_brick.utils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] <== get_connector_properties: return (45ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:35:07 compute-0 nova_compute[239545]: 2026-02-02 15:35:07.946 239549 DEBUG nova.virt.block_device [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updating existing volume attachment record: a46ead61-2021-4e21-ad7d-2216198e1442 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:35:07 compute-0 ceph-mon[75334]: osdmap e215: 3 total, 3 up, 3 in
Feb 02 15:35:07 compute-0 ceph-mon[75334]: pgmap v1058: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 2.8 MiB/s wr, 171 op/s
Feb 02 15:35:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/749886518' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/749886518' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.343 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.553 239549 DEBUG oslo_concurrency.lockutils [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.554 239549 DEBUG oslo_concurrency.lockutils [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.572 239549 INFO nova.compute.manager [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Detaching volume fc4bbe92-1043-49fb-901f-a2e31aa75a71
Feb 02 15:35:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658398427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.792 239549 INFO nova.virt.block_device [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Attempting to driver detach volume fc4bbe92-1043-49fb-901f-a2e31aa75a71 from mountpoint /dev/vdb
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.802 239549 DEBUG nova.virt.libvirt.driver [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Attempting to detach device vdb from instance df13eb08-f03e-43d5-a950-22b892d819af from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.803 239549 DEBUG nova.virt.libvirt.guest [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-fc4bbe92-1043-49fb-901f-a2e31aa75a71">
Feb 02 15:35:08 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   </source>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <serial>fc4bbe92-1043-49fb-901f-a2e31aa75a71</serial>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]: </disk>
Feb 02 15:35:08 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.819 239549 INFO nova.virt.libvirt.driver [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully detached device vdb from instance df13eb08-f03e-43d5-a950-22b892d819af from the persistent domain config.
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.820 239549 DEBUG nova.virt.libvirt.driver [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance df13eb08-f03e-43d5-a950-22b892d819af from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.821 239549 DEBUG nova.virt.libvirt.guest [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-fc4bbe92-1043-49fb-901f-a2e31aa75a71">
Feb 02 15:35:08 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   </source>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <serial>fc4bbe92-1043-49fb-901f-a2e31aa75a71</serial>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]: </disk>
Feb 02 15:35:08 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.829 239549 DEBUG nova.objects.instance [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.883 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046508.8825543, df13eb08-f03e-43d5-a950-22b892d819af => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.884 239549 DEBUG nova.virt.libvirt.driver [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance df13eb08-f03e-43d5-a950-22b892d819af _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.887 239549 INFO nova.virt.libvirt.driver [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully detached device vdb from instance df13eb08-f03e-43d5-a950-22b892d819af from the live domain config.
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.894 239549 DEBUG nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Attempting to attach volume 505add96-37b3-4fa0-b8e7-4ce2dc3c22cf with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:35:08 compute-0 nova_compute[239545]: 2026-02-02 15:35:08.896 239549 DEBUG nova.virt.libvirt.guest [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-505add96-37b3-4fa0-b8e7-4ce2dc3c22cf">
Feb 02 15:35:08 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   </source>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:35:08 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:35:08 compute-0 nova_compute[239545]:   <serial>505add96-37b3-4fa0-b8e7-4ce2dc3c22cf</serial>
Feb 02 15:35:08 compute-0 nova_compute[239545]: </disk>
Feb 02 15:35:08 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:35:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/749886518' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/749886518' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2658398427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.022 239549 DEBUG nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.023 239549 DEBUG nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.023 239549 DEBUG nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.023 239549 DEBUG nova.virt.libvirt.driver [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No VIF found with MAC fa:16:3e:d7:c0:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.170 239549 DEBUG nova.objects.instance [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.257 239549 DEBUG oslo_concurrency.lockutils [None req-1a77123f-7dec-4260-9aab-d8bce367b810 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.318 239549 DEBUG oslo_concurrency.lockutils [None req-629f8d1c-edb6-421a-8f2e-35553cce0197 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 1.5 MiB/s wr, 166 op/s
Feb 02 15:35:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273156526' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273156526' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.990 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.991 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.991 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.992 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.992 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.993 239549 INFO nova.compute.manager [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Terminating instance
Feb 02 15:35:09 compute-0 nova_compute[239545]: 2026-02-02 15:35:09.995 239549 DEBUG nova.compute.manager [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:35:10 compute-0 ceph-mon[75334]: pgmap v1059: 305 pgs: 305 active+clean; 246 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 1.5 MiB/s wr, 166 op/s
Feb 02 15:35:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1273156526' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1273156526' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:10 compute-0 kernel: tapfc0f8b6c-d0 (unregistering): left promiscuous mode
Feb 02 15:35:10 compute-0 NetworkManager[49171]: <info>  [1770046510.0898] device (tapfc0f8b6c-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:35:10 compute-0 ovn_controller[144995]: 2026-02-02T15:35:10Z|00076|binding|INFO|Releasing lport fc0f8b6c-d0b6-4a4a-b130-67e31e204221 from this chassis (sb_readonly=0)
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.096 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 ovn_controller[144995]: 2026-02-02T15:35:10Z|00077|binding|INFO|Setting lport fc0f8b6c-d0b6-4a4a-b130-67e31e204221 down in Southbound
Feb 02 15:35:10 compute-0 ovn_controller[144995]: 2026-02-02T15:35:10Z|00078|binding|INFO|Removing iface tapfc0f8b6c-d0 ovn-installed in OVS
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.098 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.107 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.111 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:02:ef 10.100.0.6'], port_security=['fa:16:3e:e6:02:ef 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'df13eb08-f03e-43d5-a950-22b892d819af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1e8ad4-4b9e-4a25-af31-be35b7753118', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387ba1e2-c4db-437f-a706-eb9807770b03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=fc0f8b6c-d0b6-4a4a-b130-67e31e204221) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.112 154982 INFO neutron.agent.ovn.metadata.agent [-] Port fc0f8b6c-d0b6-4a4a-b130-67e31e204221 in datapath 8a81d067-8083-4de2-8ac6-1682b4d8e6bb unbound from our chassis
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.113 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a81d067-8083-4de2-8ac6-1682b4d8e6bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.114 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[73966ded-96db-4e84-819f-e7093d2b6dd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.114 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb namespace which is not needed anymore
Feb 02 15:35:10 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 02 15:35:10 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.807s CPU time.
Feb 02 15:35:10 compute-0 systemd-machined[207609]: Machine qemu-5-instance-00000005 terminated.
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.222 239549 INFO nova.virt.libvirt.driver [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Instance destroyed successfully.
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.222 239549 DEBUG nova.objects.instance [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'resources' on Instance uuid df13eb08-f03e-43d5-a950-22b892d819af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:10 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [NOTICE]   (249977) : haproxy version is 2.8.14-c23fe91
Feb 02 15:35:10 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [NOTICE]   (249977) : path to executable is /usr/sbin/haproxy
Feb 02 15:35:10 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [WARNING]  (249977) : Exiting Master process...
Feb 02 15:35:10 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [ALERT]    (249977) : Current worker (249979) exited with code 143 (Terminated)
Feb 02 15:35:10 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[249973]: [WARNING]  (249977) : All workers exited. Exiting... (0)
Feb 02 15:35:10 compute-0 systemd[1]: libpod-8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5.scope: Deactivated successfully.
Feb 02 15:35:10 compute-0 podman[251277]: 2026-02-02 15:35:10.239750561 +0000 UTC m=+0.061277019 container died 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.246 239549 DEBUG nova.virt.libvirt.vif [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:34:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-437458788',display_name='tempest-VolumesBackupsTest-instance-437458788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-437458788',id=5,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjsax5FDb5fGUvyB8ABhrpMBNJUBKCrgcZrFiak24zHXTLIsVDZR1IDlBWePQfsstMPHqrf+Jx6Fe86XxqHlRK4lexDzhIFxvdGEa2SuYmMNSCyALH2/fgufYxMXTs6/Q==',key_name='tempest-keypair-358872520',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:34:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-i8xwotfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:34:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=df13eb08-f03e-43d5-a950-22b892d819af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.247 239549 DEBUG nova.network.os_vif_util [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "address": "fa:16:3e:e6:02:ef", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0f8b6c-d0", "ovs_interfaceid": "fc0f8b6c-d0b6-4a4a-b130-67e31e204221", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.248 239549 DEBUG nova.network.os_vif_util [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.248 239549 DEBUG os_vif [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.249 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.250 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc0f8b6c-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.251 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.254 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.256 239549 INFO os_vif [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:02:ef,bridge_name='br-int',has_traffic_filtering=True,id=fc0f8b6c-d0b6-4a4a-b130-67e31e204221,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc0f8b6c-d0')
Feb 02 15:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5-userdata-shm.mount: Deactivated successfully.
Feb 02 15:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e18fd7bdf9806b002d22020da5c1cde707558db025184a403f00560c3cd1c6a-merged.mount: Deactivated successfully.
Feb 02 15:35:10 compute-0 podman[251277]: 2026-02-02 15:35:10.303632963 +0000 UTC m=+0.125159411 container cleanup 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 02 15:35:10 compute-0 systemd[1]: libpod-conmon-8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5.scope: Deactivated successfully.
Feb 02 15:35:10 compute-0 podman[251337]: 2026-02-02 15:35:10.371430409 +0000 UTC m=+0.052316731 container remove 8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.375 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a939f2f7-17ee-432c-bc95-5abc96eab594]: (4, ('Mon Feb  2 03:35:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb (8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5)\n8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5\nMon Feb  2 03:35:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb (8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5)\n8fac040e1053ed6a1534f71fc64b3cff91c87bca7085bd7220a46e37f8fe3ca5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.377 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[158a9bd1-10d8-468b-ba15-5876bd4b1ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.377 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a81d067-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.379 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 kernel: tap8a81d067-80: left promiscuous mode
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.381 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.384 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[42fa67d9-71e6-4df7-8882-9ce86b8a0ecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.388 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.403 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a900f1-d0eb-4e8d-bdd9-e047e54eea34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.405 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ecf577-d846-41bc-a424-74edd661f447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.416 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb77381-ce1b-41ad-8f41-28c44a274e92]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396348, 'reachable_time': 28928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251352, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d8a81d067\x2d8083\x2d4de2\x2d8ac6\x2d1682b4d8e6bb.mount: Deactivated successfully.
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.419 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:35:10 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:10.419 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ac6303-eff2-42e0-840d-478466bc2976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.435 239549 DEBUG nova.compute.manager [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-unplugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.435 239549 DEBUG oslo_concurrency.lockutils [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.435 239549 DEBUG oslo_concurrency.lockutils [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.436 239549 DEBUG oslo_concurrency.lockutils [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.436 239549 DEBUG nova.compute.manager [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] No waiting events found dispatching network-vif-unplugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:35:10 compute-0 nova_compute[239545]: 2026-02-02 15:35:10.436 239549 DEBUG nova.compute.manager [req-0c8b9e7b-f1c0-49b5-b10b-0b283fc100b4 req-eeda1afd-8cb3-4a74-9381-2bf301b2ab4e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-unplugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:35:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261964609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261964609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb 02 15:35:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.052 239549 INFO nova.virt.libvirt.driver [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Deleting instance files /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af_del
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.054 239549 INFO nova.virt.libvirt.driver [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Deletion of /var/lib/nova/instances/df13eb08-f03e-43d5-a950-22b892d819af_del complete
Feb 02 15:35:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb 02 15:35:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3261964609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3261964609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.152 239549 INFO nova.compute.manager [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Took 1.16 seconds to destroy the instance on the hypervisor.
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.152 239549 DEBUG oslo.service.loopingcall [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.152 239549 DEBUG nova.compute.manager [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:35:11 compute-0 nova_compute[239545]: 2026-02-02 15:35:11.153 239549 DEBUG nova.network.neutron [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:35:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 229 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 1.4 MiB/s wr, 269 op/s
Feb 02 15:35:12 compute-0 ceph-mon[75334]: osdmap e216: 3 total, 3 up, 3 in
Feb 02 15:35:12 compute-0 ceph-mon[75334]: pgmap v1061: 305 pgs: 305 active+clean; 229 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 1.4 MiB/s wr, 269 op/s
Feb 02 15:35:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb 02 15:35:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb 02 15:35:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.545 239549 DEBUG nova.compute.manager [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.546 239549 DEBUG oslo_concurrency.lockutils [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "df13eb08-f03e-43d5-a950-22b892d819af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.546 239549 DEBUG oslo_concurrency.lockutils [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.546 239549 DEBUG oslo_concurrency.lockutils [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.546 239549 DEBUG nova.compute.manager [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] No waiting events found dispatching network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.547 239549 WARNING nova.compute.manager [req-c51a8041-c1c1-43b0-979c-7f2c4c52482d req-3c3c8791-bf0d-4d1c-a520-e539b0a402cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received unexpected event network-vif-plugged-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 for instance with vm_state active and task_state deleting.
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.717 239549 DEBUG nova.network.neutron [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.752 239549 INFO nova.compute.manager [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Took 1.60 seconds to deallocate network for instance.
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.812 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.813 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:12 compute-0 nova_compute[239545]: 2026-02-02 15:35:12.899 239549 DEBUG oslo_concurrency.processutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 02 15:35:13 compute-0 ceph-mon[75334]: osdmap e217: 3 total, 3 up, 3 in
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.345 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb 02 15:35:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb 02 15:35:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:35:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22407110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.433 239549 DEBUG oslo_concurrency.processutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.438 239549 DEBUG nova.compute.provider_tree [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:35:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 212 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 33 KiB/s wr, 174 op/s
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.490 239549 DEBUG nova.scheduler.client.report [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.539 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.602 239549 INFO nova.scheduler.client.report [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Deleted allocations for instance df13eb08-f03e-43d5-a950-22b892d819af
Feb 02 15:35:13 compute-0 nova_compute[239545]: 2026-02-02 15:35:13.726 239549 DEBUG oslo_concurrency.lockutils [None req-21d9cb67-5efe-4c2a-bb14-e19abe939495 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "df13eb08-f03e-43d5-a950-22b892d819af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:14 compute-0 ceph-mon[75334]: osdmap e218: 3 total, 3 up, 3 in
Feb 02 15:35:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/22407110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:14 compute-0 ceph-mon[75334]: pgmap v1064: 305 pgs: 305 active+clean; 212 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 33 KiB/s wr, 174 op/s
Feb 02 15:35:14 compute-0 nova_compute[239545]: 2026-02-02 15:35:14.674 239549 DEBUG nova.compute.manager [req-910dfd02-cbcc-4ef7-a091-9456c3703c03 req-865b7d66-eb42-4416-8898-c5f21855395c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Received event network-vif-deleted-fc0f8b6c-d0b6-4a4a-b130-67e31e204221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:15 compute-0 nova_compute[239545]: 2026-02-02 15:35:15.252 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb 02 15:35:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb 02 15:35:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb 02 15:35:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 167 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.2 KiB/s wr, 81 op/s
Feb 02 15:35:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb 02 15:35:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb 02 15:35:16 compute-0 ceph-mon[75334]: osdmap e219: 3 total, 3 up, 3 in
Feb 02 15:35:16 compute-0 ceph-mon[75334]: pgmap v1066: 305 pgs: 305 active+clean; 167 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.2 KiB/s wr, 81 op/s
Feb 02 15:35:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb 02 15:35:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 167 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.1 KiB/s wr, 136 op/s
Feb 02 15:35:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb 02 15:35:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb 02 15:35:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb 02 15:35:17 compute-0 ceph-mon[75334]: osdmap e220: 3 total, 3 up, 3 in
Feb 02 15:35:18 compute-0 nova_compute[239545]: 2026-02-02 15:35:18.347 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2292026816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2292026816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb 02 15:35:18 compute-0 ceph-mon[75334]: pgmap v1068: 305 pgs: 305 active+clean; 167 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.1 KiB/s wr, 136 op/s
Feb 02 15:35:18 compute-0 ceph-mon[75334]: osdmap e221: 3 total, 3 up, 3 in
Feb 02 15:35:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2292026816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2292026816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb 02 15:35:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb 02 15:35:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.0 KiB/s wr, 93 op/s
Feb 02 15:35:19 compute-0 ceph-mon[75334]: osdmap e222: 3 total, 3 up, 3 in
Feb 02 15:35:20 compute-0 nova_compute[239545]: 2026-02-02 15:35:20.182 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:20 compute-0 nova_compute[239545]: 2026-02-02 15:35:20.253 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb 02 15:35:20 compute-0 ceph-mon[75334]: pgmap v1071: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.0 KiB/s wr, 93 op/s
Feb 02 15:35:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb 02 15:35:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb 02 15:35:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.4 KiB/s wr, 116 op/s
Feb 02 15:35:21 compute-0 ceph-mon[75334]: osdmap e223: 3 total, 3 up, 3 in
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.684 239549 DEBUG oslo_concurrency.lockutils [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.685 239549 DEBUG oslo_concurrency.lockutils [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.703 239549 INFO nova.compute.manager [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Detaching volume 505add96-37b3-4fa0-b8e7-4ce2dc3c22cf
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.837 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.885 239549 INFO nova.virt.block_device [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Attempting to driver detach volume 505add96-37b3-4fa0-b8e7-4ce2dc3c22cf from mountpoint /dev/vdb
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.895 239549 DEBUG nova.virt.libvirt.driver [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Attempting to detach device vdb from instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.896 239549 DEBUG nova.virt.libvirt.guest [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-505add96-37b3-4fa0-b8e7-4ce2dc3c22cf">
Feb 02 15:35:21 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   </source>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <serial>505add96-37b3-4fa0-b8e7-4ce2dc3c22cf</serial>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]: </disk>
Feb 02 15:35:21 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.903 239549 INFO nova.virt.libvirt.driver [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully detached device vdb from instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c from the persistent domain config.
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.904 239549 DEBUG nova.virt.libvirt.driver [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.904 239549 DEBUG nova.virt.libvirt.guest [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-505add96-37b3-4fa0-b8e7-4ce2dc3c22cf">
Feb 02 15:35:21 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   </source>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <serial>505add96-37b3-4fa0-b8e7-4ce2dc3c22cf</serial>
Feb 02 15:35:21 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:35:21 compute-0 nova_compute[239545]: </disk>
Feb 02 15:35:21 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.954 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046521.9542506, 4b3386f6-82b3-4e67-abc7-d82021a8f04c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.957 239549 DEBUG nova.virt.libvirt.driver [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:35:21 compute-0 nova_compute[239545]: 2026-02-02 15:35:21.961 239549 INFO nova.virt.libvirt.driver [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully detached device vdb from instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c from the live domain config.
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.206 239549 DEBUG nova.objects.instance [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.247 239549 DEBUG oslo_concurrency.lockutils [None req-e165fa72-42ad-486d-be6e-c3de4c7a8c23 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.248 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.248 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.248 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.249 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.250 239549 INFO nova.compute.manager [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Terminating instance
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.251 239549 DEBUG nova.compute.manager [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:35:22 compute-0 kernel: tapee164fa3-c6 (unregistering): left promiscuous mode
Feb 02 15:35:22 compute-0 NetworkManager[49171]: <info>  [1770046522.2906] device (tapee164fa3-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.295 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 ovn_controller[144995]: 2026-02-02T15:35:22Z|00079|binding|INFO|Releasing lport ee164fa3-c608-44ca-a9ad-458e67ac2c7a from this chassis (sb_readonly=0)
Feb 02 15:35:22 compute-0 ovn_controller[144995]: 2026-02-02T15:35:22Z|00080|binding|INFO|Setting lport ee164fa3-c608-44ca-a9ad-458e67ac2c7a down in Southbound
Feb 02 15:35:22 compute-0 ovn_controller[144995]: 2026-02-02T15:35:22Z|00081|binding|INFO|Removing iface tapee164fa3-c6 ovn-installed in OVS
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.297 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.303 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:c0:e9 10.100.0.3'], port_security=['fa:16:3e:d7:c0:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4b3386f6-82b3-4e67-abc7-d82021a8f04c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010150769bb34684be4a2dff720d1b35', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9d818ae-e9da-48a5-a5e2-ecd0a0e7ed92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71d31c89-df7b-4a1a-b202-a6dac026a894, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ee164fa3-c608-44ca-a9ad-458e67ac2c7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.304 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ee164fa3-c608-44ca-a9ad-458e67ac2c7a in datapath 476af4b4-172e-44ce-8fec-4b78aa7603bb unbound from our chassis
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.305 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 476af4b4-172e-44ce-8fec-4b78aa7603bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.306 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ea967e0b-a705-43ef-a3d7-72a57128c26f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.307 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb namespace which is not needed anymore
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.314 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb 02 15:35:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb 02 15:35:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb 02 15:35:22 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 02 15:35:22 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 14.655s CPU time.
Feb 02 15:35:22 compute-0 systemd-machined[207609]: Machine qemu-6-instance-00000006 terminated.
Feb 02 15:35:22 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [NOTICE]   (250645) : haproxy version is 2.8.14-c23fe91
Feb 02 15:35:22 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [NOTICE]   (250645) : path to executable is /usr/sbin/haproxy
Feb 02 15:35:22 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [WARNING]  (250645) : Exiting Master process...
Feb 02 15:35:22 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [ALERT]    (250645) : Current worker (250648) exited with code 143 (Terminated)
Feb 02 15:35:22 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[250637]: [WARNING]  (250645) : All workers exited. Exiting... (0)
Feb 02 15:35:22 compute-0 systemd[1]: libpod-0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2.scope: Deactivated successfully.
Feb 02 15:35:22 compute-0 podman[251404]: 2026-02-02 15:35:22.435314844 +0000 UTC m=+0.044299707 container died 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.471 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.475 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.484 239549 INFO nova.virt.libvirt.driver [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Instance destroyed successfully.
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.485 239549 DEBUG nova.objects.instance [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'resources' on Instance uuid 4b3386f6-82b3-4e67-abc7-d82021a8f04c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2-userdata-shm.mount: Deactivated successfully.
Feb 02 15:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef82acc9ac2edabc02e8954cfdad3821f8fa707daa2a6baf1cf238845967cce6-merged.mount: Deactivated successfully.
Feb 02 15:35:22 compute-0 podman[251404]: 2026-02-02 15:35:22.529624605 +0000 UTC m=+0.138609458 container cleanup 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:35:22 compute-0 systemd[1]: libpod-conmon-0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2.scope: Deactivated successfully.
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.585 239549 DEBUG nova.virt.libvirt.vif [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:34:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1183795373',display_name='tempest-VolumesSnapshotTestJSON-instance-1183795373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1183795373',id=6,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLkQ4rSay35gXx1sFkXgaspAfMZ05j08g+NZnAfJgqfVDoVOOX3N+K2gSQe2SOV6USMmPUPKwx63dSLrOJFEAS7IXNmtZlVlE4Hhp+41AEUZLhOaiuapRiHo54watMy3g==',key_name='tempest-keypair-308051672',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:34:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-cmhqxqvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:34:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=4b3386f6-82b3-4e67-abc7-d82021a8f04c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.586 239549 DEBUG nova.network.os_vif_util [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "address": "fa:16:3e:d7:c0:e9", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee164fa3-c6", "ovs_interfaceid": "ee164fa3-c608-44ca-a9ad-458e67ac2c7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.589 239549 DEBUG nova.network.os_vif_util [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.589 239549 DEBUG os_vif [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.592 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.592 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee164fa3-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.594 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.599 239549 INFO os_vif [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:c0:e9,bridge_name='br-int',has_traffic_filtering=True,id=ee164fa3-c608-44ca-a9ad-458e67ac2c7a,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee164fa3-c6')
Feb 02 15:35:22 compute-0 ceph-mon[75334]: pgmap v1073: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.4 KiB/s wr, 116 op/s
Feb 02 15:35:22 compute-0 ceph-mon[75334]: osdmap e224: 3 total, 3 up, 3 in
Feb 02 15:35:22 compute-0 podman[251442]: 2026-02-02 15:35:22.800969794 +0000 UTC m=+0.250875834 container remove 0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.807 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d44c4c-8cf6-4221-9c98-c15c919705b8]: (4, ('Mon Feb  2 03:35:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb (0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2)\n0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2\nMon Feb  2 03:35:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb (0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2)\n0a68ca87cec72154cf948374f1db2339cbd2241adc968be5be260e9878e821d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.808 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4129c76e-bc91-4a7d-bd38-b89802bbc553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.809 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap476af4b4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.811 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 kernel: tap476af4b4-10: left promiscuous mode
Feb 02 15:35:22 compute-0 nova_compute[239545]: 2026-02-02 15:35:22.827 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.829 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7faa7b89-d791-4d7a-b752-47d4495acb19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.849 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[78f438d0-4858-4955-abc4-7a470a19c7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.850 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[87af033d-414d-4881-a38f-b8f3b4b30ca4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.862 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e30a51-0c0e-4e3e-822c-5c72ac10f02a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397358, 'reachable_time': 15974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251477, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.864 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:35:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:22.864 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaedb8b-f852-48da-a707-2d345ca5e0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d476af4b4\x2d172e\x2d44ce\x2d8fec\x2d4b78aa7603bb.mount: Deactivated successfully.
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.272 239549 DEBUG nova.compute.manager [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-unplugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.273 239549 DEBUG oslo_concurrency.lockutils [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.273 239549 DEBUG oslo_concurrency.lockutils [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.273 239549 DEBUG oslo_concurrency.lockutils [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.273 239549 DEBUG nova.compute.manager [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] No waiting events found dispatching network-vif-unplugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.273 239549 DEBUG nova.compute.manager [req-f4dd9477-2324-44fb-a201-962dce4d09a1 req-ed0418d7-f383-4b61-9727-06cee0bc6018 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-unplugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.8 KiB/s wr, 126 op/s
Feb 02 15:35:23 compute-0 ceph-mon[75334]: pgmap v1075: 305 pgs: 305 active+clean; 159 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.8 KiB/s wr, 126 op/s
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.733 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.952 239549 INFO nova.virt.libvirt.driver [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Deleting instance files /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c_del
Feb 02 15:35:23 compute-0 nova_compute[239545]: 2026-02-02 15:35:23.953 239549 INFO nova.virt.libvirt.driver [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Deletion of /var/lib/nova/instances/4b3386f6-82b3-4e67-abc7-d82021a8f04c_del complete
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.054 239549 INFO nova.compute.manager [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Took 1.80 seconds to destroy the instance on the hypervisor.
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.054 239549 DEBUG oslo.service.loopingcall [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.055 239549 DEBUG nova.compute.manager [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.055 239549 DEBUG nova.network.neutron [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.908 239549 DEBUG nova.network.neutron [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:24 compute-0 nova_compute[239545]: 2026-02-02 15:35:24.968 239549 INFO nova.compute.manager [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Took 0.91 seconds to deallocate network for instance.
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.087 239549 WARNING nova.volume.cinder [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Attachment a46ead61-2021-4e21-ad7d-2216198e1442 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = a46ead61-2021-4e21-ad7d-2216198e1442. (HTTP 404) (Request-ID: req-e79e7a1d-3e48-4701-af18-92d8becf44a8)
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.088 239549 INFO nova.compute.manager [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Took 0.12 seconds to detach 1 volumes for instance.
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.204 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.205 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.221 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046510.22001, df13eb08-f03e-43d5-a950-22b892d819af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.222 239549 INFO nova.compute.manager [-] [instance: df13eb08-f03e-43d5-a950-22b892d819af] VM Stopped (Lifecycle Event)
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.249 239549 DEBUG nova.compute.manager [None req-227a3d00-aa5e-4840-98b0-80e5c4eb3831 - - - - - -] [instance: df13eb08-f03e-43d5-a950-22b892d819af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.258 239549 DEBUG oslo_concurrency.processutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.406 239549 DEBUG nova.compute.manager [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.406 239549 DEBUG oslo_concurrency.lockutils [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.407 239549 DEBUG oslo_concurrency.lockutils [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.407 239549 DEBUG oslo_concurrency.lockutils [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.407 239549 DEBUG nova.compute.manager [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] No waiting events found dispatching network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.407 239549 WARNING nova.compute.manager [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received unexpected event network-vif-plugged-ee164fa3-c608-44ca-a9ad-458e67ac2c7a for instance with vm_state deleted and task_state None.
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.407 239549 DEBUG nova.compute.manager [req-f7326eab-f2be-46c9-9e41-77d5a67974af req-391d82a5-4c6c-4c5a-a028-122a9f6a4915 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Received event network-vif-deleted-ee164fa3-c608-44ca-a9ad-458e67ac2c7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 109 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.0 KiB/s wr, 139 op/s
Feb 02 15:35:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:35:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907546357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.789 239549 DEBUG oslo_concurrency.processutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.794 239549 DEBUG nova.compute.provider_tree [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.830 239549 DEBUG nova.scheduler.client.report [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.890 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:25 compute-0 nova_compute[239545]: 2026-02-02 15:35:25.951 239549 INFO nova.scheduler.client.report [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Deleted allocations for instance 4b3386f6-82b3-4e67-abc7-d82021a8f04c
Feb 02 15:35:26 compute-0 nova_compute[239545]: 2026-02-02 15:35:26.124 239549 DEBUG oslo_concurrency.lockutils [None req-9ed6ef7f-e1e7-4c74-b281-e48c1236bbaf 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "4b3386f6-82b3-4e67-abc7-d82021a8f04c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:26 compute-0 ceph-mon[75334]: pgmap v1076: 305 pgs: 305 active+clean; 109 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.0 KiB/s wr, 139 op/s
Feb 02 15:35:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1907546357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094826545' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb 02 15:35:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb 02 15:35:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb 02 15:35:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 94 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.4 KiB/s wr, 123 op/s
Feb 02 15:35:27 compute-0 nova_compute[239545]: 2026-02-02 15:35:27.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2094826545' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:27 compute-0 ceph-mon[75334]: osdmap e225: 3 total, 3 up, 3 in
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054994657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054994657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:28 compute-0 podman[251502]: 2026-02-02 15:35:28.330038048 +0000 UTC m=+0.068966637 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 02 15:35:28 compute-0 podman[251501]: 2026-02-02 15:35:28.34783615 +0000 UTC m=+0.093078891 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 15:35:28 compute-0 nova_compute[239545]: 2026-02-02 15:35:28.350 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/813785946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb 02 15:35:28 compute-0 ceph-mon[75334]: pgmap v1078: 305 pgs: 305 active+clean; 94 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.4 KiB/s wr, 123 op/s
Feb 02 15:35:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4054994657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4054994657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/813785946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958814287' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958814287' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 93 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 438 KiB/s wr, 82 op/s
Feb 02 15:35:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb 02 15:35:29 compute-0 ceph-mon[75334]: osdmap e226: 3 total, 3 up, 3 in
Feb 02 15:35:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1958814287' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1958814287' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:29 compute-0 ceph-mon[75334]: pgmap v1080: 305 pgs: 305 active+clean; 93 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 438 KiB/s wr, 82 op/s
Feb 02 15:35:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb 02 15:35:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb 02 15:35:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb 02 15:35:30 compute-0 ceph-mon[75334]: osdmap e227: 3 total, 3 up, 3 in
Feb 02 15:35:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb 02 15:35:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb 02 15:35:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 126 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.4 MiB/s wr, 242 op/s
Feb 02 15:35:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590888561' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb 02 15:35:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb 02 15:35:31 compute-0 ceph-mon[75334]: osdmap e228: 3 total, 3 up, 3 in
Feb 02 15:35:31 compute-0 ceph-mon[75334]: pgmap v1083: 305 pgs: 305 active+clean; 126 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.4 MiB/s wr, 242 op/s
Feb 02 15:35:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/590888561' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb 02 15:35:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb 02 15:35:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb 02 15:35:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb 02 15:35:32 compute-0 nova_compute[239545]: 2026-02-02 15:35:32.598 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:32 compute-0 ceph-mon[75334]: osdmap e229: 3 total, 3 up, 3 in
Feb 02 15:35:32 compute-0 ceph-mon[75334]: osdmap e230: 3 total, 3 up, 3 in
Feb 02 15:35:33 compute-0 nova_compute[239545]: 2026-02-02 15:35:33.352 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 150 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 14 MiB/s wr, 268 op/s
Feb 02 15:35:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb 02 15:35:33 compute-0 ceph-mon[75334]: pgmap v1086: 305 pgs: 305 active+clean; 150 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 14 MiB/s wr, 268 op/s
Feb 02 15:35:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb 02 15:35:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb 02 15:35:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131495499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:35 compute-0 ceph-mon[75334]: osdmap e231: 3 total, 3 up, 3 in
Feb 02 15:35:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1131495499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 325 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 41 MiB/s wr, 312 op/s
Feb 02 15:35:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb 02 15:35:36 compute-0 ceph-mon[75334]: pgmap v1088: 305 pgs: 305 active+clean; 325 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 41 MiB/s wr, 312 op/s
Feb 02 15:35:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb 02 15:35:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb 02 15:35:37 compute-0 ceph-mon[75334]: osdmap e232: 3 total, 3 up, 3 in
Feb 02 15:35:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:37 compute-0 nova_compute[239545]: 2026-02-02 15:35:37.482 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046522.4811459, 4b3386f6-82b3-4e67-abc7-d82021a8f04c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:35:37 compute-0 nova_compute[239545]: 2026-02-02 15:35:37.483 239549 INFO nova.compute.manager [-] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] VM Stopped (Lifecycle Event)
Feb 02 15:35:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 535 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 70 MiB/s wr, 324 op/s
Feb 02 15:35:37 compute-0 nova_compute[239545]: 2026-02-02 15:35:37.518 239549 DEBUG nova.compute.manager [None req-7053d128-0324-4bd1-9afa-1c4998a7d0ef - - - - - -] [instance: 4b3386f6-82b3-4e67-abc7-d82021a8f04c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:37 compute-0 nova_compute[239545]: 2026-02-02 15:35:37.600 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:38 compute-0 nova_compute[239545]: 2026-02-02 15:35:38.354 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:38 compute-0 ceph-mon[75334]: pgmap v1090: 305 pgs: 305 active+clean; 535 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 70 MiB/s wr, 324 op/s
Feb 02 15:35:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 772 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 89 MiB/s wr, 272 op/s
Feb 02 15:35:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb 02 15:35:39 compute-0 ceph-mon[75334]: pgmap v1091: 305 pgs: 305 active+clean; 772 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 89 MiB/s wr, 272 op/s
Feb 02 15:35:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb 02 15:35:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb 02 15:35:41 compute-0 ceph-mon[75334]: osdmap e233: 3 total, 3 up, 3 in
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.309 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.310 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.391 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:35:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 122 MiB/s wr, 393 op/s
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.598 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.598 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.607 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.608 239549 INFO nova.compute.claims [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:35:41 compute-0 nova_compute[239545]: 2026-02-02 15:35:41.769 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:42 compute-0 ceph-mon[75334]: pgmap v1093: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 122 MiB/s wr, 393 op/s
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/125539137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/125539137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:35:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079724935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.335 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.342 239549 DEBUG nova.compute.provider_tree [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.361 239549 DEBUG nova.scheduler.client.report [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.408 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.409 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb 02 15:35:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb 02 15:35:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.489 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.489 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.517 239549 INFO nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.565 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.622 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.675 239549 DEBUG nova.policy [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2059424184a34c2da768a2a83c23a7f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '010150769bb34684be4a2dff720d1b35', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.707 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.708 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.709 239549 INFO nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Creating image(s)
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.746 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.778 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.808 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:35:42
Feb 02 15:35:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:35:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:35:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'images', '.rgw.root']
Feb 02 15:35:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.812 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.891 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.894 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.896 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.896 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.925 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:42 compute-0 nova_compute[239545]: 2026-02-02 15:35:42.930 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/125539137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/125539137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2079724935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:35:43 compute-0 ceph-mon[75334]: osdmap e234: 3 total, 3 up, 3 in
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.260 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.308 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] resizing rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.356 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.450 239549 DEBUG nova.objects.instance [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'migration_context' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.481 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.482 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Ensure instance console log exists: /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.482 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.483 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.483 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 86 MiB/s wr, 182 op/s
Feb 02 15:35:43 compute-0 nova_compute[239545]: 2026-02-02 15:35:43.510 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Successfully created port: 64418707-ba84-4b70-969a-d0882e71bae7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:35:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb 02 15:35:44 compute-0 ceph-mon[75334]: pgmap v1095: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 86 MiB/s wr, 182 op/s
Feb 02 15:35:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.407 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Successfully updated port: 64418707-ba84-4b70-969a-d0882e71bae7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:35:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.487 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.488 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquired lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.488 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.561 239549 DEBUG nova.compute.manager [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-changed-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.561 239549 DEBUG nova.compute.manager [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Refreshing instance network info cache due to event network-changed-64418707-ba84-4b70-969a-d0882e71bae7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.561 239549 DEBUG oslo_concurrency.lockutils [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:35:44 compute-0 nova_compute[239545]: 2026-02-02 15:35:44.714 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:35:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:35:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 71 MiB/s wr, 308 op/s
Feb 02 15:35:45 compute-0 ceph-mon[75334]: osdmap e235: 3 total, 3 up, 3 in
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.323 239549 DEBUG nova.network.neutron [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updating instance_info_cache with network_info: [{"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.375 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Releasing lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.376 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Instance network_info: |[{"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.376 239549 DEBUG oslo_concurrency.lockutils [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.376 239549 DEBUG nova.network.neutron [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Refreshing network info cache for port 64418707-ba84-4b70-969a-d0882e71bae7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.380 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Start _get_guest_xml network_info=[{"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.384 239549 WARNING nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.391 239549 DEBUG nova.virt.libvirt.host [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.392 239549 DEBUG nova.virt.libvirt.host [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.395 239549 DEBUG nova.virt.libvirt.host [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.395 239549 DEBUG nova.virt.libvirt.host [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.396 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.396 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.397 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.397 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.397 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.397 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.398 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.398 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.398 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.398 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.399 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.400 239549 DEBUG nova.virt.hardware [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.404 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:46 compute-0 ceph-mon[75334]: pgmap v1097: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 71 MiB/s wr, 308 op/s
Feb 02 15:35:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500231203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.930 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.950 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:46 compute-0 nova_compute[239545]: 2026-02-02 15:35:46.954 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094609415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.479 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.481 239549 DEBUG nova.virt.libvirt.vif [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:35:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-600180089',display_name='tempest-VolumesSnapshotTestJSON-instance-600180089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-600180089',id=7,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIiThGePfK2DrL2AItHEHOGIrdU1smZ3U40keJe8fQQtl5n612JiE/KiPwhNPhY4j3H7qa5W9L8WWgGPcgmddwbzlNN11KVdKqW6TkB0kL+C6GYSzoEU6/cvMXh+RuBnIg==',key_name='tempest-keypair-1188114825',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-d6bqhw8i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:35:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=a39fdefd-dea8-4cde-af15-a9b32e21ec59,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.482 239549 DEBUG nova.network.os_vif_util [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.483 239549 DEBUG nova.network.os_vif_util [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.484 239549 DEBUG nova.objects.instance [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'pci_devices' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:35:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 60 MiB/s wr, 294 op/s
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.530 239549 DEBUG nova.network.neutron [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updated VIF entry in instance network info cache for port 64418707-ba84-4b70-969a-d0882e71bae7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.531 239549 DEBUG nova.network.neutron [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updating instance_info_cache with network_info: [{"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.626 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.675 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <uuid>a39fdefd-dea8-4cde-af15-a9b32e21ec59</uuid>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <name>instance-00000007</name>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-600180089</nova:name>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:35:46</nova:creationTime>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:user uuid="2059424184a34c2da768a2a83c23a7f5">tempest-VolumesSnapshotTestJSON-1645199079-project-member</nova:user>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:project uuid="010150769bb34684be4a2dff720d1b35">tempest-VolumesSnapshotTestJSON-1645199079</nova:project>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <nova:port uuid="64418707-ba84-4b70-969a-d0882e71bae7">
Feb 02 15:35:47 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <system>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="serial">a39fdefd-dea8-4cde-af15-a9b32e21ec59</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="uuid">a39fdefd-dea8-4cde-af15-a9b32e21ec59</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </system>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <os>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </os>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <features>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </features>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk">
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </source>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config">
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </source>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:35:47 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:fd:a5:a1"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <target dev="tap64418707-ba"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/console.log" append="off"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <video>
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </video>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:35:47 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:35:47 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:35:47 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:35:47 compute-0 nova_compute[239545]: </domain>
Feb 02 15:35:47 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.676 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Preparing to wait for external event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.676 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.676 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.677 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.677 239549 DEBUG nova.virt.libvirt.vif [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:35:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-600180089',display_name='tempest-VolumesSnapshotTestJSON-instance-600180089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-600180089',id=7,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIiThGePfK2DrL2AItHEHOGIrdU1smZ3U40keJe8fQQtl5n612JiE/KiPwhNPhY4j3H7qa5W9L8WWgGPcgmddwbzlNN11KVdKqW6TkB0kL+C6GYSzoEU6/cvMXh+RuBnIg==',key_name='tempest-keypair-1188114825',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-d6bqhw8i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:35:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=a39fdefd-dea8-4cde-af15-a9b32e21ec59,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.677 239549 DEBUG nova.network.os_vif_util [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.678 239549 DEBUG nova.network.os_vif_util [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.678 239549 DEBUG os_vif [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.679 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.679 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.679 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.682 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.682 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64418707-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.682 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64418707-ba, col_values=(('external_ids', {'iface-id': '64418707-ba84-4b70-969a-d0882e71bae7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:a5:a1', 'vm-uuid': 'a39fdefd-dea8-4cde-af15-a9b32e21ec59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.683 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:47 compute-0 NetworkManager[49171]: <info>  [1770046547.6843] manager: (tap64418707-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.688 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.689 239549 INFO os_vif [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba')
Feb 02 15:35:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.750 239549 DEBUG oslo_concurrency.lockutils [req-3a9b9c9f-4fbd-4580-980d-94d54723264b req-2a941b37-0805-489d-ba6b-95b4507d1de1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:35:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/500231203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4094609415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:47 compute-0 ceph-mon[75334]: pgmap v1098: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 60 MiB/s wr, 294 op/s
Feb 02 15:35:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb 02 15:35:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.973 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.974 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.974 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No VIF found with MAC fa:16:3e:fd:a5:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:35:47 compute-0 nova_compute[239545]: 2026-02-02 15:35:47.975 239549 INFO nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Using config drive
Feb 02 15:35:48 compute-0 nova_compute[239545]: 2026-02-02 15:35:48.064 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:48 compute-0 nova_compute[239545]: 2026-02-02 15:35:48.357 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:48 compute-0 ceph-mon[75334]: osdmap e236: 3 total, 3 up, 3 in
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.005 239549 INFO nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Creating config drive at /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.011 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_e1m0616 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.130 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_e1m0616" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389448798' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.147 239549 DEBUG nova.storage.rbd_utils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] rbd image a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.150 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.411 239549 DEBUG oslo_concurrency.processutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config a39fdefd-dea8-4cde-af15-a9b32e21ec59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.411 239549 INFO nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Deleting local config drive /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59/disk.config because it was imported into RBD.
Feb 02 15:35:49 compute-0 kernel: tap64418707-ba: entered promiscuous mode
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.4482] manager: (tap64418707-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Feb 02 15:35:49 compute-0 ovn_controller[144995]: 2026-02-02T15:35:49Z|00082|binding|INFO|Claiming lport 64418707-ba84-4b70-969a-d0882e71bae7 for this chassis.
Feb 02 15:35:49 compute-0 ovn_controller[144995]: 2026-02-02T15:35:49Z|00083|binding|INFO|64418707-ba84-4b70-969a-d0882e71bae7: Claiming fa:16:3e:fd:a5:a1 10.100.0.12
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.450 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.461 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 ovn_controller[144995]: 2026-02-02T15:35:49Z|00084|binding|INFO|Setting lport 64418707-ba84-4b70-969a-d0882e71bae7 ovn-installed in OVS
Feb 02 15:35:49 compute-0 ovn_controller[144995]: 2026-02-02T15:35:49Z|00085|binding|INFO|Setting lport 64418707-ba84-4b70-969a-d0882e71bae7 up in Southbound
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.461 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:a5:a1 10.100.0.12'], port_security=['fa:16:3e:fd:a5:a1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a39fdefd-dea8-4cde-af15-a9b32e21ec59', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010150769bb34684be4a2dff720d1b35', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3cbd3bf-cbad-4116-898c-fe2794c264e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71d31c89-df7b-4a1a-b202-a6dac026a894, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=64418707-ba84-4b70-969a-d0882e71bae7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.462 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 64418707-ba84-4b70-969a-d0882e71bae7 in datapath 476af4b4-172e-44ce-8fec-4b78aa7603bb bound to our chassis
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.463 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.465 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 systemd-udevd[251872]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:35:49 compute-0 systemd-machined[207609]: New machine qemu-7-instance-00000007.
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.473 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[04261220-aa64-4460-9e0d-eee38aa062ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.475 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap476af4b4-11 in ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.476 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap476af4b4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.476 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2073b42-5b18-433f-92ba-c9fde6daf4fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.477 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f516a28b-ad3d-45f8-bb5f-458ce45683cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.4815] device (tap64418707-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.4822] device (tap64418707-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.485 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[79e9b874-c198-4582-8235-6c32e768a76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.493 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae016b6-5a05-457f-b5e5-2db0cb2f75e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 21 MiB/s wr, 155 op/s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.512 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[27b76fc2-e673-4815-a9aa-16c74b9f8b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.519 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[41302323-7366-4339-a1ff-5fba11547c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 systemd-udevd[251876]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.5199] manager: (tap476af4b4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.543 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[1e614457-6de6-459a-9164-ede7b75b7695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.546 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2bec23c3-2cc5-4582-90fd-c973eccb9b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.5653] device (tap476af4b4-10): carrier: link connected
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.568 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbdc557-ef41-465b-b1da-b19af674a4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.586 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f61d027a-4b9a-4a62-bf77-a7dd09eede3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap476af4b4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:15:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403548, 'reachable_time': 44558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251905, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.600 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[317cc3c7-50ed-4264-8561-7a709040f741]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:15ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403548, 'tstamp': 403548}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251906, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.615 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ff9ed2-6d6a-4b40-9762-47cabe2bb681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap476af4b4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:15:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403548, 'reachable_time': 44558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251907, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:35:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698134956' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:35:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698134956' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.636 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b18119dd-8cb6-4f7c-a3a7-354589716787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.698 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[117a1d8c-c23a-47e0-b040-8e442af40f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.699 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap476af4b4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.700 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.700 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap476af4b4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:49 compute-0 NetworkManager[49171]: <info>  [1770046549.7035] manager: (tap476af4b4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Feb 02 15:35:49 compute-0 kernel: tap476af4b4-10: entered promiscuous mode
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.702 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.706 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.707 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap476af4b4-10, col_values=(('external_ids', {'iface-id': 'dc26ec84-c08a-465a-bcfe-8d9ce28f5877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:35:49 compute-0 ovn_controller[144995]: 2026-02-02T15:35:49Z|00086|binding|INFO|Releasing lport dc26ec84-c08a-465a-bcfe-8d9ce28f5877 from this chassis (sb_readonly=0)
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.709 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.721 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.722 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.722 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.723 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c152d254-e37c-4bd4-9b3c-51127a258701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.724 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/476af4b4-172e-44ce-8fec-4b78aa7603bb.pid.haproxy
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 476af4b4-172e-44ce-8fec-4b78aa7603bb
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:35:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:49.725 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'env', 'PROCESS_TAG=haproxy-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/476af4b4-172e-44ce-8fec-4b78aa7603bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.738 239549 DEBUG nova.compute.manager [req-c2f39da3-21c6-49fa-a8d2-e70c46cc3d7a req-a4006724-fc2d-43a5-bf8f-e9b3066d221a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.738 239549 DEBUG oslo_concurrency.lockutils [req-c2f39da3-21c6-49fa-a8d2-e70c46cc3d7a req-a4006724-fc2d-43a5-bf8f-e9b3066d221a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.739 239549 DEBUG oslo_concurrency.lockutils [req-c2f39da3-21c6-49fa-a8d2-e70c46cc3d7a req-a4006724-fc2d-43a5-bf8f-e9b3066d221a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.740 239549 DEBUG oslo_concurrency.lockutils [req-c2f39da3-21c6-49fa-a8d2-e70c46cc3d7a req-a4006724-fc2d-43a5-bf8f-e9b3066d221a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:49 compute-0 nova_compute[239545]: 2026-02-02 15:35:49.740 239549 DEBUG nova.compute.manager [req-c2f39da3-21c6-49fa-a8d2-e70c46cc3d7a req-a4006724-fc2d-43a5-bf8f-e9b3066d221a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Processing event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:35:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb 02 15:35:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb 02 15:35:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1389448798' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:50 compute-0 ceph-mon[75334]: pgmap v1100: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 21 MiB/s wr, 155 op/s
Feb 02 15:35:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1698134956' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:35:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1698134956' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:35:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb 02 15:35:50 compute-0 podman[251939]: 2026-02-02 15:35:50.03507431 +0000 UTC m=+0.018479600 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:35:50 compute-0 podman[251939]: 2026-02-02 15:35:50.271609075 +0000 UTC m=+0.255014345 container create 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:35:50 compute-0 systemd[1]: Started libpod-conmon-7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535.scope.
Feb 02 15:35:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e578b22d5bd12da1662a7c6662adb669079469d0a8ca3e7a73ec2d1ac885607c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:50 compute-0 podman[251939]: 2026-02-02 15:35:50.41435397 +0000 UTC m=+0.397759260 container init 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:35:50 compute-0 podman[251939]: 2026-02-02 15:35:50.421819792 +0000 UTC m=+0.405225072 container start 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:35:50 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [NOTICE]   (251994) : New worker (251996) forked
Feb 02 15:35:50 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [NOTICE]   (251994) : Loading success.
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.560 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.561 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046550.56028, a39fdefd-dea8-4cde-af15-a9b32e21ec59 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.561 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] VM Started (Lifecycle Event)
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.564 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.567 239549 INFO nova.virt.libvirt.driver [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Instance spawned successfully.
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.567 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.623 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.625 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.654 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.654 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.655 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.655 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.656 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.657 239549 DEBUG nova.virt.libvirt.driver [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.722 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.722 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046550.5642133, a39fdefd-dea8-4cde-af15-a9b32e21ec59 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.722 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] VM Paused (Lifecycle Event)
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.874 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.876 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046550.564529, a39fdefd-dea8-4cde-af15-a9b32e21ec59 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.877 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] VM Resumed (Lifecycle Event)
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.900 239549 INFO nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Took 8.19 seconds to spawn the instance on the hypervisor.
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.901 239549 DEBUG nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.909 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:35:50 compute-0 nova_compute[239545]: 2026-02-02 15:35:50.913 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.055 239549 INFO nova.compute.manager [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Took 9.49 seconds to build instance.
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.084 239549 DEBUG oslo_concurrency.lockutils [None req-3ede528b-7435-454f-8df7-dedf8d378f10 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Feb 02 15:35:51 compute-0 ceph-mon[75334]: osdmap e237: 3 total, 3 up, 3 in
Feb 02 15:35:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Feb 02 15:35:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Feb 02 15:35:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 4.2 MiB/s wr, 160 op/s
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.971 239549 DEBUG nova.compute.manager [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.972 239549 DEBUG oslo_concurrency.lockutils [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.972 239549 DEBUG oslo_concurrency.lockutils [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.972 239549 DEBUG oslo_concurrency.lockutils [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.972 239549 DEBUG nova.compute.manager [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] No waiting events found dispatching network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:35:51 compute-0 nova_compute[239545]: 2026-02-02 15:35:51.972 239549 WARNING nova.compute.manager [req-ebefd3dd-e00b-49f1-95f9-fc1e553ebc7e req-3fc6411b-9716-4571-a608-fe15b47e7c23 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received unexpected event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 for instance with vm_state active and task_state None.
Feb 02 15:35:52 compute-0 ceph-mon[75334]: osdmap e238: 3 total, 3 up, 3 in
Feb 02 15:35:52 compute-0 ceph-mon[75334]: pgmap v1103: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 4.2 MiB/s wr, 160 op/s
Feb 02 15:35:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Feb 02 15:35:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Feb 02 15:35:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Feb 02 15:35:52 compute-0 nova_compute[239545]: 2026-02-02 15:35:52.730 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591789023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273635003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:53 compute-0 sudo[252012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:35:53 compute-0 sudo[252012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:53 compute-0 sudo[252012]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:53 compute-0 sudo[252037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:35:53 compute-0 sudo[252037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:53 compute-0 nova_compute[239545]: 2026-02-02 15:35:53.359 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 590 KiB/s wr, 149 op/s
Feb 02 15:35:53 compute-0 ceph-mon[75334]: osdmap e239: 3 total, 3 up, 3 in
Feb 02 15:35:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1591789023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4273635003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:53 compute-0 sudo[252037]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:35:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:35:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:35:54 compute-0 sudo[252093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:35:54 compute-0 sudo[252093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:54 compute-0 sudo[252093]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:54 compute-0 nova_compute[239545]: 2026-02-02 15:35:54.058 239549 DEBUG nova.compute.manager [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-changed-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:35:54 compute-0 nova_compute[239545]: 2026-02-02 15:35:54.058 239549 DEBUG nova.compute.manager [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Refreshing instance network info cache due to event network-changed-64418707-ba84-4b70-969a-d0882e71bae7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:35:54 compute-0 nova_compute[239545]: 2026-02-02 15:35:54.058 239549 DEBUG oslo_concurrency.lockutils [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:35:54 compute-0 nova_compute[239545]: 2026-02-02 15:35:54.058 239549 DEBUG oslo_concurrency.lockutils [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:35:54 compute-0 nova_compute[239545]: 2026-02-02 15:35:54.058 239549 DEBUG nova.network.neutron [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Refreshing network info cache for port 64418707-ba84-4b70-969a-d0882e71bae7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:35:54 compute-0 sudo[252118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:35:54 compute-0 sudo[252118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.329265436 +0000 UTC m=+0.030598294 container create 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:35:54 compute-0 systemd[1]: Started libpod-conmon-8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f.scope.
Feb 02 15:35:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.38340208 +0000 UTC m=+0.084734988 container init 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.389379046 +0000 UTC m=+0.090711924 container start 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.392795708 +0000 UTC m=+0.094128586 container attach 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:35:54 compute-0 hardcore_bartik[252173]: 167 167
Feb 02 15:35:54 compute-0 systemd[1]: libpod-8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f.scope: Deactivated successfully.
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.408402867 +0000 UTC m=+0.109735725 container died 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003493328209988981 of space, bias 1.0, pg target 0.10479984629966943 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.016674476062024237 of space, bias 1.0, pg target 5.002342818607271 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.82572463161585e-07 of space, bias 1.0, pg target 0.00023085887663266757 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659935560435719 of space, bias 1.0, pg target 0.1964680990328537 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2938752882010366e-06 of space, bias 4.0, pg target 0.0015267728400772233 quantized to 16 (current 16)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011255555284235201 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012381110812658724 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:35:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.316100416 +0000 UTC m=+0.017433294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b1dbfafaf6c7f7e3e3d6d50187691da7fbabfba98ad8f6ec1796787e48ef85-merged.mount: Deactivated successfully.
Feb 02 15:35:54 compute-0 podman[252156]: 2026-02-02 15:35:54.443769796 +0000 UTC m=+0.145102654 container remove 8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:35:54 compute-0 systemd[1]: libpod-conmon-8f2fc808aef127fccdd8d1685b7eafc91e5653ed39f6b1f01b8abeeeabf6311f.scope: Deactivated successfully.
Feb 02 15:35:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Feb 02 15:35:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Feb 02 15:35:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Feb 02 15:35:54 compute-0 podman[252199]: 2026-02-02 15:35:54.564773495 +0000 UTC m=+0.039736486 container create c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:35:54 compute-0 ceph-mon[75334]: pgmap v1105: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 590 KiB/s wr, 149 op/s
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:35:54 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:35:54 compute-0 systemd[1]: Started libpod-conmon-c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef.scope.
Feb 02 15:35:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:54 compute-0 podman[252199]: 2026-02-02 15:35:54.545850745 +0000 UTC m=+0.020813766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:54 compute-0 podman[252199]: 2026-02-02 15:35:54.655811296 +0000 UTC m=+0.130774307 container init c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:35:54 compute-0 podman[252199]: 2026-02-02 15:35:54.664912877 +0000 UTC m=+0.139875868 container start c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:35:54 compute-0 podman[252199]: 2026-02-02 15:35:54.668997236 +0000 UTC m=+0.143960227 container attach c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:35:55 compute-0 thirsty_burnell[252215]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:35:55 compute-0 thirsty_burnell[252215]: --> All data devices are unavailable
Feb 02 15:35:55 compute-0 systemd[1]: libpod-c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef.scope: Deactivated successfully.
Feb 02 15:35:55 compute-0 podman[252199]: 2026-02-02 15:35:55.117371225 +0000 UTC m=+0.592334246 container died c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca5cafbbc8f788dd4530fbe7a641be5d63be426e34b17cdad51e37ca42011b60-merged.mount: Deactivated successfully.
Feb 02 15:35:55 compute-0 podman[252199]: 2026-02-02 15:35:55.161555768 +0000 UTC m=+0.636518759 container remove c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:35:55 compute-0 systemd[1]: libpod-conmon-c9ef637edb8a29afd0953ed588836cf50f162d9d948c0b784b0d26cfcd7844ef.scope: Deactivated successfully.
Feb 02 15:35:55 compute-0 sudo[252118]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:55 compute-0 sudo[252246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:35:55 compute-0 sudo[252246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:55 compute-0 sudo[252246]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:55 compute-0 sudo[252271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:35:55 compute-0 sudo[252271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 38 KiB/s wr, 196 op/s
Feb 02 15:35:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.567624499 +0000 UTC m=+0.043740343 container create 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:35:55 compute-0 ceph-mon[75334]: osdmap e240: 3 total, 3 up, 3 in
Feb 02 15:35:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Feb 02 15:35:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Feb 02 15:35:55 compute-0 systemd[1]: Started libpod-conmon-0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07.scope.
Feb 02 15:35:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.633877168 +0000 UTC m=+0.109993012 container init 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.638337847 +0000 UTC m=+0.114453691 container start 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.546325252 +0000 UTC m=+0.022441116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.641582266 +0000 UTC m=+0.117698120 container attach 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:35:55 compute-0 cool_diffie[252326]: 167 167
Feb 02 15:35:55 compute-0 systemd[1]: libpod-0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07.scope: Deactivated successfully.
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.643577164 +0000 UTC m=+0.119693008 container died 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-40a3e3987fa01bcd0b62482722a8a7ea16127532af298090ee6cfb4dc8c3884d-merged.mount: Deactivated successfully.
Feb 02 15:35:55 compute-0 podman[252309]: 2026-02-02 15:35:55.677639912 +0000 UTC m=+0.153755756 container remove 0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:35:55 compute-0 systemd[1]: libpod-conmon-0af780d74dd9bdc2ff1cdd9b4386a815ffc2b0278a1dfca28cedf9cf553cef07.scope: Deactivated successfully.
Feb 02 15:35:55 compute-0 podman[252350]: 2026-02-02 15:35:55.823926324 +0000 UTC m=+0.049164815 container create d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:35:55 compute-0 systemd[1]: Started libpod-conmon-d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7.scope.
Feb 02 15:35:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfd5480b318443e7a43894425a803a181a6af01dadf31f89535e8a3a86fad83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfd5480b318443e7a43894425a803a181a6af01dadf31f89535e8a3a86fad83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfd5480b318443e7a43894425a803a181a6af01dadf31f89535e8a3a86fad83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfd5480b318443e7a43894425a803a181a6af01dadf31f89535e8a3a86fad83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:55 compute-0 podman[252350]: 2026-02-02 15:35:55.806992823 +0000 UTC m=+0.032231344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:55 compute-0 podman[252350]: 2026-02-02 15:35:55.901941129 +0000 UTC m=+0.127179650 container init d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:35:55 compute-0 podman[252350]: 2026-02-02 15:35:55.907051282 +0000 UTC m=+0.132289773 container start d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:35:55 compute-0 podman[252350]: 2026-02-02 15:35:55.910139018 +0000 UTC m=+0.135377539 container attach d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]: {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     "0": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "devices": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "/dev/loop3"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             ],
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_name": "ceph_lv0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_size": "21470642176",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "name": "ceph_lv0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "tags": {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_name": "ceph",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.crush_device_class": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.encrypted": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.objectstore": "bluestore",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_id": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.vdo": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.with_tpm": "0"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             },
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "vg_name": "ceph_vg0"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         }
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     ],
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     "1": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "devices": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "/dev/loop4"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             ],
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_name": "ceph_lv1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_size": "21470642176",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "name": "ceph_lv1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "tags": {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_name": "ceph",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.crush_device_class": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.encrypted": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.objectstore": "bluestore",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_id": "1",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.vdo": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.with_tpm": "0"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             },
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "vg_name": "ceph_vg1"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         }
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     ],
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     "2": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "devices": [
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "/dev/loop5"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             ],
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_name": "ceph_lv2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_size": "21470642176",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "name": "ceph_lv2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "tags": {
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.cluster_name": "ceph",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.crush_device_class": "",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.encrypted": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.objectstore": "bluestore",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osd_id": "2",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.vdo": "0",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:                 "ceph.with_tpm": "0"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             },
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "type": "block",
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:             "vg_name": "ceph_vg2"
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:         }
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]:     ]
Feb 02 15:35:56 compute-0 dreamy_pascal[252367]: }
Feb 02 15:35:56 compute-0 nova_compute[239545]: 2026-02-02 15:35:56.185 239549 DEBUG nova.network.neutron [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updated VIF entry in instance network info cache for port 64418707-ba84-4b70-969a-d0882e71bae7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:35:56 compute-0 nova_compute[239545]: 2026-02-02 15:35:56.187 239549 DEBUG nova.network.neutron [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updating instance_info_cache with network_info: [{"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:35:56 compute-0 systemd[1]: libpod-d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7.scope: Deactivated successfully.
Feb 02 15:35:56 compute-0 podman[252350]: 2026-02-02 15:35:56.199170467 +0000 UTC m=+0.424408998 container died d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:35:56 compute-0 nova_compute[239545]: 2026-02-02 15:35:56.211 239549 DEBUG oslo_concurrency.lockutils [req-ff098739-6484-45ef-95e6-7fa30fc9e181 req-0d719adb-dd02-4f33-b8cc-a3c403684c76 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-a39fdefd-dea8-4cde-af15-a9b32e21ec59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cfd5480b318443e7a43894425a803a181a6af01dadf31f89535e8a3a86fad83-merged.mount: Deactivated successfully.
Feb 02 15:35:56 compute-0 podman[252350]: 2026-02-02 15:35:56.239471346 +0000 UTC m=+0.464709847 container remove d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pascal, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:35:56 compute-0 systemd[1]: libpod-conmon-d5c7083c27d986ae12bcfe86fb480baa4927649010d9abd15346033b2d2d77e7.scope: Deactivated successfully.
Feb 02 15:35:56 compute-0 sudo[252271]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:56 compute-0 sudo[252388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:35:56 compute-0 sudo[252388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:56 compute-0 sudo[252388]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:56 compute-0 sudo[252413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:35:56 compute-0 sudo[252413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:56 compute-0 ceph-mon[75334]: pgmap v1107: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 38 KiB/s wr, 196 op/s
Feb 02 15:35:56 compute-0 ceph-mon[75334]: osdmap e241: 3 total, 3 up, 3 in
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.656098493 +0000 UTC m=+0.034669492 container create 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:35:56 compute-0 systemd[1]: Started libpod-conmon-2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56.scope.
Feb 02 15:35:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.717399152 +0000 UTC m=+0.095970191 container init 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.723236323 +0000 UTC m=+0.101807332 container start 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb 02 15:35:56 compute-0 condescending_chatelet[252467]: 167 167
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.726449092 +0000 UTC m=+0.105020081 container attach 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:35:56 compute-0 systemd[1]: libpod-2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56.scope: Deactivated successfully.
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.727061337 +0000 UTC m=+0.105632326 container died 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.641575101 +0000 UTC m=+0.020146120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a5029b0c42eea753b92ea3db10bf2510de94c3912d51e0340d0a9bd7e645bfe-merged.mount: Deactivated successfully.
Feb 02 15:35:56 compute-0 podman[252450]: 2026-02-02 15:35:56.765235723 +0000 UTC m=+0.143806712 container remove 2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:35:56 compute-0 systemd[1]: libpod-conmon-2133fc66a8cb3cd098370721b9ab848ae814d9d234a54944e1da97e71a79da56.scope: Deactivated successfully.
Feb 02 15:35:56 compute-0 podman[252491]: 2026-02-02 15:35:56.892918555 +0000 UTC m=+0.036982770 container create eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:35:56 compute-0 systemd[1]: Started libpod-conmon-eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8.scope.
Feb 02 15:35:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203712fe4cf421d6b37c0c8a5bdd3d138ddb1c028f2c6ddd1fb10f2cb1249a20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203712fe4cf421d6b37c0c8a5bdd3d138ddb1c028f2c6ddd1fb10f2cb1249a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203712fe4cf421d6b37c0c8a5bdd3d138ddb1c028f2c6ddd1fb10f2cb1249a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203712fe4cf421d6b37c0c8a5bdd3d138ddb1c028f2c6ddd1fb10f2cb1249a20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:35:56 compute-0 podman[252491]: 2026-02-02 15:35:56.8774876 +0000 UTC m=+0.021551845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:35:57 compute-0 podman[252491]: 2026-02-02 15:35:57.038812138 +0000 UTC m=+0.182876373 container init eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:35:57 compute-0 podman[252491]: 2026-02-02 15:35:57.044815763 +0000 UTC m=+0.188879998 container start eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:35:57 compute-0 podman[252491]: 2026-02-02 15:35:57.120813629 +0000 UTC m=+0.264877844 container attach eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:35:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160949868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:35:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb 02 15:35:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb 02 15:35:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb 02 15:35:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.9 KiB/s wr, 233 op/s
Feb 02 15:35:57 compute-0 nova_compute[239545]: 2026-02-02 15:35:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2160949868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:57 compute-0 ceph-mon[75334]: osdmap e242: 3 total, 3 up, 3 in
Feb 02 15:35:57 compute-0 lvm[252585]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:35:57 compute-0 lvm[252585]: VG ceph_vg0 finished
Feb 02 15:35:57 compute-0 lvm[252588]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:35:57 compute-0 lvm[252588]: VG ceph_vg1 finished
Feb 02 15:35:57 compute-0 lvm[252590]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:35:57 compute-0 lvm[252590]: VG ceph_vg2 finished
Feb 02 15:35:57 compute-0 nova_compute[239545]: 2026-02-02 15:35:57.732 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:57 compute-0 friendly_stonebraker[252508]: {}
Feb 02 15:35:57 compute-0 systemd[1]: libpod-eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8.scope: Deactivated successfully.
Feb 02 15:35:57 compute-0 systemd[1]: libpod-eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8.scope: Consumed 1.092s CPU time.
Feb 02 15:35:57 compute-0 podman[252491]: 2026-02-02 15:35:57.877287791 +0000 UTC m=+1.021352006 container died eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-203712fe4cf421d6b37c0c8a5bdd3d138ddb1c028f2c6ddd1fb10f2cb1249a20-merged.mount: Deactivated successfully.
Feb 02 15:35:58 compute-0 podman[252491]: 2026-02-02 15:35:58.115455924 +0000 UTC m=+1.259520139 container remove eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_stonebraker, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:35:58 compute-0 systemd[1]: libpod-conmon-eb4cefdda25e0d87ebd245e39a7894b6845193760810e46de928fe6714ac94c8.scope: Deactivated successfully.
Feb 02 15:35:58 compute-0 sudo[252413]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:35:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:35:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:58 compute-0 sudo[252607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:35:58 compute-0 sudo[252607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:35:58 compute-0 sudo[252607]: pam_unix(sudo:session): session closed for user root
Feb 02 15:35:58 compute-0 nova_compute[239545]: 2026-02-02 15:35:58.361 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:35:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:35:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/74689313' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:58 compute-0 ceph-mon[75334]: pgmap v1110: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.9 KiB/s wr, 233 op/s
Feb 02 15:35:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:58 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:35:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/74689313' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:35:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:59.248 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:35:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:59.255 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:35:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:35:59.256 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:35:59 compute-0 podman[252632]: 2026-02-02 15:35:59.32545774 +0000 UTC m=+0.065814390 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:35:59 compute-0 podman[252633]: 2026-02-02 15:35:59.337640406 +0000 UTC m=+0.075917756 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 02 15:35:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Feb 02 15:35:59 compute-0 nova_compute[239545]: 2026-02-02 15:35:59.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:35:59 compute-0 nova_compute[239545]: 2026-02-02 15:35:59.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:35:59 compute-0 nova_compute[239545]: 2026-02-02 15:35:59.595 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:35:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb 02 15:35:59 compute-0 ceph-mon[75334]: pgmap v1111: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Feb 02 15:35:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb 02 15:35:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb 02 15:36:00 compute-0 nova_compute[239545]: 2026-02-02 15:36:00.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:00 compute-0 ceph-mon[75334]: osdmap e243: 3 total, 3 up, 3 in
Feb 02 15:36:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 1.4 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 MiB/s wr, 324 op/s
Feb 02 15:36:01 compute-0 ceph-mon[75334]: pgmap v1113: 305 pgs: 305 active+clean; 1.4 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 MiB/s wr, 324 op/s
Feb 02 15:36:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:02 compute-0 nova_compute[239545]: 2026-02-02 15:36:02.735 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb 02 15:36:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb 02 15:36:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.399 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 1.6 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 324 KiB/s rd, 81 MiB/s wr, 270 op/s
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.584 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.586 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:36:03 compute-0 nova_compute[239545]: 2026-02-02 15:36:03.586 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:36:04 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/593061496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.253 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:04 compute-0 ceph-mon[75334]: osdmap e244: 3 total, 3 up, 3 in
Feb 02 15:36:04 compute-0 ceph-mon[75334]: pgmap v1115: 305 pgs: 305 active+clean; 1.6 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 324 KiB/s rd, 81 MiB/s wr, 270 op/s
Feb 02 15:36:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/593061496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.329 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.329 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.457 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.458 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4409MB free_disk=59.967324334196746GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.458 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.459 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:04 compute-0 ovn_controller[144995]: 2026-02-02T15:36:04Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:a5:a1 10.100.0.12
Feb 02 15:36:04 compute-0 ovn_controller[144995]: 2026-02-02T15:36:04Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:a5:a1 10.100.0.12
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.746 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.747 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.747 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:36:04 compute-0 nova_compute[239545]: 2026-02-02 15:36:04.783 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:36:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216186070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:05 compute-0 nova_compute[239545]: 2026-02-02 15:36:05.326 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:05 compute-0 nova_compute[239545]: 2026-02-02 15:36:05.330 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:36:05 compute-0 nova_compute[239545]: 2026-02-02 15:36:05.353 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:36:05 compute-0 nova_compute[239545]: 2026-02-02 15:36:05.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:36:05 compute-0 nova_compute[239545]: 2026-02-02 15:36:05.398 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 384 KiB/s rd, 80 MiB/s wr, 348 op/s
Feb 02 15:36:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4216186070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929207084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929207084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:06 compute-0 nova_compute[239545]: 2026-02-02 15:36:06.399 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:06 compute-0 nova_compute[239545]: 2026-02-02 15:36:06.399 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:36:06 compute-0 nova_compute[239545]: 2026-02-02 15:36:06.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb 02 15:36:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb 02 15:36:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb 02 15:36:06 compute-0 ceph-mon[75334]: pgmap v1116: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 384 KiB/s rd, 80 MiB/s wr, 348 op/s
Feb 02 15:36:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/929207084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/929207084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 597 KiB/s rd, 105 MiB/s wr, 400 op/s
Feb 02 15:36:07 compute-0 nova_compute[239545]: 2026-02-02 15:36:07.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb 02 15:36:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb 02 15:36:07 compute-0 ceph-mon[75334]: osdmap e245: 3 total, 3 up, 3 in
Feb 02 15:36:07 compute-0 ceph-mon[75334]: pgmap v1118: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 597 KiB/s rd, 105 MiB/s wr, 400 op/s
Feb 02 15:36:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb 02 15:36:07 compute-0 nova_compute[239545]: 2026-02-02 15:36:07.739 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941195144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2941195144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:08 compute-0 nova_compute[239545]: 2026-02-02 15:36:08.402 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:08 compute-0 ceph-mon[75334]: osdmap e246: 3 total, 3 up, 3 in
Feb 02 15:36:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2941195144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2941195144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 550 KiB/s rd, 75 MiB/s wr, 294 op/s
Feb 02 15:36:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb 02 15:36:09 compute-0 ceph-mon[75334]: pgmap v1120: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 550 KiB/s rd, 75 MiB/s wr, 294 op/s
Feb 02 15:36:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb 02 15:36:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb 02 15:36:10 compute-0 ceph-mon[75334]: osdmap e247: 3 total, 3 up, 3 in
Feb 02 15:36:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 542 KiB/s rd, 68 MiB/s wr, 301 op/s
Feb 02 15:36:11 compute-0 ceph-mon[75334]: pgmap v1122: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 542 KiB/s rd, 68 MiB/s wr, 301 op/s
Feb 02 15:36:11 compute-0 nova_compute[239545]: 2026-02-02 15:36:11.863 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:11.863 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:36:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:11.865 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:36:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb 02 15:36:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb 02 15:36:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb 02 15:36:12 compute-0 nova_compute[239545]: 2026-02-02 15:36:12.754 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:13 compute-0 nova_compute[239545]: 2026-02-02 15:36:13.403 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:13 compute-0 ceph-mon[75334]: osdmap e248: 3 total, 3 up, 3 in
Feb 02 15:36:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 35 MiB/s wr, 255 op/s
Feb 02 15:36:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:13.867 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:14 compute-0 ceph-mon[75334]: pgmap v1124: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 35 MiB/s wr, 255 op/s
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 27 MiB/s wr, 196 op/s
Feb 02 15:36:16 compute-0 ceph-mon[75334]: pgmap v1125: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 27 MiB/s wr, 196 op/s
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.786 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.787 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.848 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.926 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.927 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.933 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:36:16 compute-0 nova_compute[239545]: 2026-02-02 15:36:16.934 239549 INFO nova.compute.claims [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.039 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb 02 15:36:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb 02 15:36:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb 02 15:36:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 13 MiB/s wr, 163 op/s
Feb 02 15:36:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:36:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361928335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.561 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.565 239549 DEBUG nova.compute.provider_tree [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.580 239549 DEBUG nova.scheduler.client.report [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.603 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.604 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.663 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.664 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.680 239549 INFO nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.697 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.757 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.788 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.789 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.791 239549 INFO nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Creating image(s)
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.818 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.840 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.863 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.868 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.888 239549 DEBUG nova.policy [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b10e73971e784c20a0843cf9caf5cbbe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.925 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.925 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.926 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.926 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.943 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:17 compute-0 nova_compute[239545]: 2026-02-02 15:36:17.945 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb a19161ab-082d-4489-93df-8008cdef83ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.220 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb a19161ab-082d-4489-93df-8008cdef83ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.288 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] resizing rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:36:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1186830822' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.356 239549 DEBUG nova.objects.instance [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'migration_context' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.380 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.380 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Ensure instance console log exists: /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.381 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.381 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.381 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:18 compute-0 nova_compute[239545]: 2026-02-02 15:36:18.440 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:18 compute-0 ceph-mon[75334]: osdmap e249: 3 total, 3 up, 3 in
Feb 02 15:36:18 compute-0 ceph-mon[75334]: pgmap v1127: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 13 MiB/s wr, 163 op/s
Feb 02 15:36:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2361928335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1186830822' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:19 compute-0 nova_compute[239545]: 2026-02-02 15:36:19.284 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Successfully created port: 8489a727-801c-4762-8094-7fe19ffe6dc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:36:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb 02 15:36:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb 02 15:36:19 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb 02 15:36:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 40 op/s
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.269 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Successfully updated port: 8489a727-801c-4762-8094-7fe19ffe6dc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.287 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.287 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquired lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.288 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.355 239549 DEBUG nova.compute.manager [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-changed-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.355 239549 DEBUG nova.compute.manager [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Refreshing instance network info cache due to event network-changed-8489a727-801c-4762-8094-7fe19ffe6dc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.355 239549 DEBUG oslo_concurrency.lockutils [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:20 compute-0 nova_compute[239545]: 2026-02-02 15:36:20.432 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:36:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb 02 15:36:20 compute-0 ceph-mon[75334]: osdmap e250: 3 total, 3 up, 3 in
Feb 02 15:36:20 compute-0 ceph-mon[75334]: pgmap v1129: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 40 op/s
Feb 02 15:36:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb 02 15:36:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.235 239549 DEBUG nova.network.neutron [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating instance_info_cache with network_info: [{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.256 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Releasing lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.256 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Instance network_info: |[{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.257 239549 DEBUG oslo_concurrency.lockutils [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.257 239549 DEBUG nova.network.neutron [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Refreshing network info cache for port 8489a727-801c-4762-8094-7fe19ffe6dc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.259 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Start _get_guest_xml network_info=[{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.265 239549 WARNING nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.269 239549 DEBUG nova.virt.libvirt.host [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.270 239549 DEBUG nova.virt.libvirt.host [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.273 239549 DEBUG nova.virt.libvirt.host [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.273 239549 DEBUG nova.virt.libvirt.host [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.273 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.274 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.274 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.274 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.274 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.275 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.275 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.275 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.275 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.275 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.276 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.276 239549 DEBUG nova.virt.hardware [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.278 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:21 compute-0 ceph-mon[75334]: osdmap e251: 3 total, 3 up, 3 in
Feb 02 15:36:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.3 MiB/s wr, 165 op/s
Feb 02 15:36:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182969709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.819 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.841 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:21 compute-0 nova_compute[239545]: 2026-02-02 15:36:21.845 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629579830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.365 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.366 239549 DEBUG nova.virt.libvirt.vif [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-851722124',display_name='tempest-VolumesBackupsTest-instance-851722124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-851722124',id=8,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxGZ42nZmy9eFXLNdGTZZ19oqS2EDCx8WV4lvxPpX26iHVNIzHKrUETtaVbtlSEIzrxlFV11P13FOzbbdPfC/FpLJMgr90TaCBLcQZVsQySCSrgZkjhs8C7ilx+k8W4PA==',key_name='tempest-keypair-1227534713',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-owyyzt3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:36:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=a19161ab-082d-4489-93df-8008cdef83ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.366 239549 DEBUG nova.network.os_vif_util [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.367 239549 DEBUG nova.network.os_vif_util [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.368 239549 DEBUG nova.objects.instance [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'pci_devices' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.382 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <uuid>a19161ab-082d-4489-93df-8008cdef83ce</uuid>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <name>instance-00000008</name>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesBackupsTest-instance-851722124</nova:name>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:36:21</nova:creationTime>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:user uuid="b10e73971e784c20a0843cf9caf5cbbe">tempest-VolumesBackupsTest-1207235356-project-member</nova:user>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:project uuid="cd39cd97fc8041569e2a21b01b4ed0db">tempest-VolumesBackupsTest-1207235356</nova:project>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <nova:port uuid="8489a727-801c-4762-8094-7fe19ffe6dc8">
Feb 02 15:36:22 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <system>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="serial">a19161ab-082d-4489-93df-8008cdef83ce</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="uuid">a19161ab-082d-4489-93df-8008cdef83ce</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </system>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <os>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </os>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <features>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </features>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/a19161ab-082d-4489-93df-8008cdef83ce_disk">
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </source>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/a19161ab-082d-4489-93df-8008cdef83ce_disk.config">
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </source>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:36:22 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:a8:29:66"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <target dev="tap8489a727-80"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/console.log" append="off"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <video>
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </video>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:36:22 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:36:22 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:36:22 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:36:22 compute-0 nova_compute[239545]: </domain>
Feb 02 15:36:22 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.383 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Preparing to wait for external event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.383 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.383 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.384 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.384 239549 DEBUG nova.virt.libvirt.vif [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-851722124',display_name='tempest-VolumesBackupsTest-instance-851722124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-851722124',id=8,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxGZ42nZmy9eFXLNdGTZZ19oqS2EDCx8WV4lvxPpX26iHVNIzHKrUETtaVbtlSEIzrxlFV11P13FOzbbdPfC/FpLJMgr90TaCBLcQZVsQySCSrgZkjhs8C7ilx+k8W4PA==',key_name='tempest-keypair-1227534713',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-owyyzt3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:36:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=a19161ab-082d-4489-93df-8008cdef83ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.385 239549 DEBUG nova.network.os_vif_util [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.385 239549 DEBUG nova.network.os_vif_util [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.386 239549 DEBUG os_vif [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.386 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.387 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.387 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.393 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.393 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8489a727-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.394 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8489a727-80, col_values=(('external_ids', {'iface-id': '8489a727-801c-4762-8094-7fe19ffe6dc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:29:66', 'vm-uuid': 'a19161ab-082d-4489-93df-8008cdef83ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.395 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:22 compute-0 NetworkManager[49171]: <info>  [1770046582.3958] manager: (tap8489a727-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.396 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.402 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.402 239549 INFO os_vif [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80')
Feb 02 15:36:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.459 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.460 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.461 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No VIF found with MAC fa:16:3e:a8:29:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.462 239549 INFO nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Using config drive
Feb 02 15:36:22 compute-0 ceph-mon[75334]: pgmap v1131: 305 pgs: 305 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.3 MiB/s wr, 165 op/s
Feb 02 15:36:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/182969709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1629579830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:22 compute-0 nova_compute[239545]: 2026-02-02 15:36:22.488 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.109 239549 INFO nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Creating config drive at /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.114 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqsr3mq2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.137 239549 DEBUG nova.network.neutron [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updated VIF entry in instance network info cache for port 8489a727-801c-4762-8094-7fe19ffe6dc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.138 239549 DEBUG nova.network.neutron [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating instance_info_cache with network_info: [{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.155 239549 DEBUG oslo_concurrency.lockutils [req-ee9722fd-36da-4634-99f9-6304a1ccc005 req-b951c3b7-7803-4232-aa47-d702f4bb83ed d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.238 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqsr3mq2d" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.263 239549 DEBUG nova.storage.rbd_utils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] rbd image a19161ab-082d-4489-93df-8008cdef83ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.268 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config a19161ab-082d-4489-93df-8008cdef83ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.380 239549 DEBUG oslo_concurrency.processutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config a19161ab-082d-4489-93df-8008cdef83ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.381 239549 INFO nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Deleting local config drive /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce/disk.config because it was imported into RBD.
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.4101] manager: (tap8489a727-80): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Feb 02 15:36:23 compute-0 kernel: tap8489a727-80: entered promiscuous mode
Feb 02 15:36:23 compute-0 ovn_controller[144995]: 2026-02-02T15:36:23Z|00087|binding|INFO|Claiming lport 8489a727-801c-4762-8094-7fe19ffe6dc8 for this chassis.
Feb 02 15:36:23 compute-0 ovn_controller[144995]: 2026-02-02T15:36:23Z|00088|binding|INFO|8489a727-801c-4762-8094-7fe19ffe6dc8: Claiming fa:16:3e:a8:29:66 10.100.0.10
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.411 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 ovn_controller[144995]: 2026-02-02T15:36:23Z|00089|binding|INFO|Setting lport 8489a727-801c-4762-8094-7fe19ffe6dc8 ovn-installed in OVS
Feb 02 15:36:23 compute-0 ovn_controller[144995]: 2026-02-02T15:36:23Z|00090|binding|INFO|Setting lport 8489a727-801c-4762-8094-7fe19ffe6dc8 up in Southbound
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.423 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.421 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:29:66 10.100.0.10'], port_security=['fa:16:3e:a8:29:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a19161ab-082d-4489-93df-8008cdef83ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '671c18e5-7ce5-4db4-9b07-3da2aec604fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387ba1e2-c4db-437f-a706-eb9807770b03, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=8489a727-801c-4762-8094-7fe19ffe6dc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.422 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 8489a727-801c-4762-8094-7fe19ffe6dc8 in datapath 8a81d067-8083-4de2-8ac6-1682b4d8e6bb bound to our chassis
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.423 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.425 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.433 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd70985-b957-4c85-bd02-c56c79740e86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.434 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a81d067-81 in ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:36:23 compute-0 systemd-udevd[253046]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.436 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a81d067-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.436 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa8d67c-b840-4902-b428-76dac23cc8a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 systemd-machined[207609]: New machine qemu-8-instance-00000008.
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.437 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8377c88c-d5cd-43be-aa98-227620f7f3c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.440 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.4443] device (tap8489a727-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.4454] device (tap8489a727-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.447 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[e3aaba5e-5926-4f94-bddc-1d15c24a9d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.467 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cecb29ff-2079-4696-bfcf-0cc6aebad773]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.493 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5a6820-aa18-49b6-a4e6-cf40b844e221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.4979] manager: (tap8a81d067-80): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.498 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e6c7d7-2ad8-4662-900d-ebcb9a3d140b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 11 MiB/s wr, 177 op/s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.524 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[363d60c5-ab08-402d-a227-d1c1cff149ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.527 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[82c7477a-e986-46f1-8ef7-31dd3bebb09f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.5470] device (tap8a81d067-80): carrier: link connected
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.550 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[008c9354-29d9-4136-8b8a-5bba8900d881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.560 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0b6137-4acf-4c08-8de5-36535194a319]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a81d067-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:2e:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406946, 'reachable_time': 16500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253078, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.571 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[56a6e69f-9d3f-46d7-8ba0-57a7c8c64dd2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:2e9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 406946, 'tstamp': 406946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253079, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.585 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[084e87c7-083d-4b97-952e-17f1ce1249aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a81d067-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:2e:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406946, 'reachable_time': 16500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253080, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.604 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d257b52f-4713-4256-95dd-508aff754215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.636 239549 DEBUG nova.compute.manager [req-bfad1140-34c2-4cd7-9220-ee811b00b90d req-d0261fd8-63a3-4bff-b917-173e44a6dbcc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.636 239549 DEBUG oslo_concurrency.lockutils [req-bfad1140-34c2-4cd7-9220-ee811b00b90d req-d0261fd8-63a3-4bff-b917-173e44a6dbcc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.636 239549 DEBUG oslo_concurrency.lockutils [req-bfad1140-34c2-4cd7-9220-ee811b00b90d req-d0261fd8-63a3-4bff-b917-173e44a6dbcc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.636 239549 DEBUG oslo_concurrency.lockutils [req-bfad1140-34c2-4cd7-9220-ee811b00b90d req-d0261fd8-63a3-4bff-b917-173e44a6dbcc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.637 239549 DEBUG nova.compute.manager [req-bfad1140-34c2-4cd7-9220-ee811b00b90d req-d0261fd8-63a3-4bff-b917-173e44a6dbcc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Processing event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.643 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4895190b-4343-42f9-b0ce-e0f185c65c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.644 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a81d067-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.644 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.645 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a81d067-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.646 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 kernel: tap8a81d067-80: entered promiscuous mode
Feb 02 15:36:23 compute-0 NetworkManager[49171]: <info>  [1770046583.6475] manager: (tap8a81d067-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.655 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a81d067-80, col_values=(('external_ids', {'iface-id': '0e2183d9-9021-4390-95a4-b6c8ee275a55'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.656 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 ovn_controller[144995]: 2026-02-02T15:36:23Z|00091|binding|INFO|Releasing lport 0e2183d9-9021-4390-95a4-b6c8ee275a55 from this chassis (sb_readonly=0)
Feb 02 15:36:23 compute-0 nova_compute[239545]: 2026-02-02 15:36:23.666 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.668 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.668 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[50758eec-3bea-47c6-a927-ae0dbb94e753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.669 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.pid.haproxy
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 8a81d067-8083-4de2-8ac6-1682b4d8e6bb
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:36:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:23.669 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'env', 'PROCESS_TAG=haproxy-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a81d067-8083-4de2-8ac6-1682b4d8e6bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:36:23 compute-0 podman[253112]: 2026-02-02 15:36:23.956777452 +0000 UTC m=+0.041036009 container create 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:36:23 compute-0 systemd[1]: Started libpod-conmon-82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571.scope.
Feb 02 15:36:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2023a184731e3b0868b2fc3dd306d36aeeee21e48e37d6020650edd5b834538c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:36:24 compute-0 podman[253112]: 2026-02-02 15:36:24.020727244 +0000 UTC m=+0.104985821 container init 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:36:24 compute-0 podman[253112]: 2026-02-02 15:36:24.0247414 +0000 UTC m=+0.108999957 container start 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:36:24 compute-0 podman[253112]: 2026-02-02 15:36:23.935770246 +0000 UTC m=+0.020028823 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:36:24 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [NOTICE]   (253131) : New worker (253133) forked
Feb 02 15:36:24 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [NOTICE]   (253131) : Loading success.
Feb 02 15:36:24 compute-0 ceph-mon[75334]: pgmap v1132: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 11 MiB/s wr, 177 op/s
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.099 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046585.0983968, a19161ab-082d-4489-93df-8008cdef83ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.100 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] VM Started (Lifecycle Event)
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.103 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.107 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.111 239549 INFO nova.virt.libvirt.driver [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Instance spawned successfully.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.111 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.130 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.135 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.139 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.140 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.140 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.141 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.141 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.141 239549 DEBUG nova.virt.libvirt.driver [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.197 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.198 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046585.0988271, a19161ab-082d-4489-93df-8008cdef83ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.198 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] VM Paused (Lifecycle Event)
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.228 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.233 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046585.1062562, a19161ab-082d-4489-93df-8008cdef83ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.233 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] VM Resumed (Lifecycle Event)
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.236 239549 INFO nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Took 7.45 seconds to spawn the instance on the hypervisor.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.237 239549 DEBUG nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.247 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.251 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.301 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.327 239549 INFO nova.compute.manager [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Took 8.43 seconds to build instance.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.344 239549 DEBUG oslo_concurrency.lockutils [None req-632d3b0b-32eb-44fe-9b9a-780fabe893b7 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 8.0 MiB/s wr, 142 op/s
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.569 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.570 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.584 239549 DEBUG nova.objects.instance [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:36:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5126 writes, 22K keys, 5126 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5126 writes, 5126 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1815 writes, 8143 keys, 1815 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s
                                           Interval WAL: 1815 writes, 1815 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    115.6      0.21              0.05        12    0.018       0      0       0.0       0.0
                                             L6      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    162.0    133.3      0.60              0.24        11    0.055     49K   5795       0.0       0.0
                                            Sum      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    119.7    128.7      0.81              0.29        23    0.035     49K   5795       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    129.9    130.2      0.35              0.10        10    0.035     24K   2591       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    162.0    133.3      0.60              0.24        11    0.055     49K   5795       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.2      0.21              0.05        11    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 0.8 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 304.00 MB usage: 9.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000103 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(568,8.86 MB,2.91558%) FilterBlock(24,142.92 KB,0.0459119%) IndexBlock(24,276.95 KB,0.0889678%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.601 239549 INFO nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Ignoring supplied device name: /dev/vdb
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.628 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.847 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.847 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.848 239549 INFO nova.compute.manager [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Attaching volume e51a8a52-a8d6-4d5e-9f64-251b0ad7991c to /dev/vdb
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.913 239549 DEBUG nova.compute.manager [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.914 239549 DEBUG oslo_concurrency.lockutils [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.914 239549 DEBUG oslo_concurrency.lockutils [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.915 239549 DEBUG oslo_concurrency.lockutils [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.915 239549 DEBUG nova.compute.manager [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] No waiting events found dispatching network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.915 239549 WARNING nova.compute.manager [req-293904e5-9bc7-4ca5-a005-5f3008896a5c req-4feec1c5-3082-4d95-96f9-40106dcf9dd1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received unexpected event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 for instance with vm_state active and task_state None.
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.992 239549 DEBUG os_brick.utils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:36:25 compute-0 nova_compute[239545]: 2026-02-02 15:36:25.993 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.009 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.009 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e201e-0319-41a6-b204-479ec2704bd6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.011 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.018 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.019 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[dabcb4a8-33a3-48fb-a3ca-c8198572d500]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.021 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.029 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.030 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a775a5fb-99dd-4429-a568-a2a804a5ec62]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.032 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[54004b76-baf9-42c9-8821-81401d39f5ad]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.032 239549 DEBUG oslo_concurrency.processutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.048 239549 DEBUG oslo_concurrency.processutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.050 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.050 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.050 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.051 239549 DEBUG os_brick.utils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.051 239549 DEBUG nova.virt.block_device [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updating existing volume attachment record: 749d51ff-9964-46d2-a1c3-b370e7aa8cb9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:36:26 compute-0 ceph-mon[75334]: pgmap v1133: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 8.0 MiB/s wr, 142 op/s
Feb 02 15:36:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2343935701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.893 239549 DEBUG nova.objects.instance [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.918 239549 DEBUG nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Attempting to attach volume e51a8a52-a8d6-4d5e-9f64-251b0ad7991c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:36:26 compute-0 nova_compute[239545]: 2026-02-02 15:36:26.920 239549 DEBUG nova.virt.libvirt.guest [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:36:26 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-e51a8a52-a8d6-4d5e-9f64-251b0ad7991c">
Feb 02 15:36:26 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   </source>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:36:26 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:36:26 compute-0 nova_compute[239545]:   <serial>e51a8a52-a8d6-4d5e-9f64-251b0ad7991c</serial>
Feb 02 15:36:26 compute-0 nova_compute[239545]: </disk>
Feb 02 15:36:26 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.012 239549 DEBUG nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.012 239549 DEBUG nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.012 239549 DEBUG nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.013 239549 DEBUG nova.virt.libvirt.driver [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] No VIF found with MAC fa:16:3e:fd:a5:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.208 239549 DEBUG oslo_concurrency.lockutils [None req-7c4ea2bb-caef-4e10-9bbe-8fd283bce14b 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.335 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.336 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.367 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.396 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.448 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.449 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.454 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.455 239549 INFO nova.compute.claims [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:36:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 128 op/s
Feb 02 15:36:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2343935701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:27 compute-0 nova_compute[239545]: 2026-02-02 15:36:27.604 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.076 239549 DEBUG nova.compute.manager [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-changed-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.076 239549 DEBUG nova.compute.manager [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Refreshing instance network info cache due to event network-changed-8489a727-801c-4762-8094-7fe19ffe6dc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.077 239549 DEBUG oslo_concurrency.lockutils [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.077 239549 DEBUG oslo_concurrency.lockutils [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.077 239549 DEBUG nova.network.neutron [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Refreshing network info cache for port 8489a727-801c-4762-8094-7fe19ffe6dc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:36:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:36:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676823877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.133 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.139 239549 DEBUG nova.compute.provider_tree [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:36:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105325885' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105325885' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.154 239549 DEBUG nova.scheduler.client.report [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.181 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.182 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.227 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.227 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.247 239549 INFO nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.286 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.339 239549 INFO nova.virt.block_device [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Booting with volume a7ec739a-d3b1-49b0-a843-632e26b65015 at /dev/vdb
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.397 239549 DEBUG nova.policy [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1869bacd75349e1b296189b33fb5426', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38955a398ac84e6292ec72dd46d5a973', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.443 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.474 239549 DEBUG os_brick.utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.475 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.484 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.484 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2a7a9f-4fd1-46c5-98e1-4a8bd97fbbfa]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.485 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.492 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.493 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb48aa7-e189-4ac2-9f0f-86dc6e4622b4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.494 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.501 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.501 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[71a87a22-9a4f-489f-b49e-03295e723378]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.502 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf484fa-cc92-4b29-badf-b3bc898ee5c6]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.502 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.517 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.519 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.520 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.520 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.520 239549 DEBUG os_brick.utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:36:28 compute-0 nova_compute[239545]: 2026-02-02 15:36:28.521 239549 DEBUG nova.virt.block_device [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updating existing volume attachment record: c9d102a3-f231-45cb-8053-7e397ed07496 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:36:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb 02 15:36:28 compute-0 ceph-mon[75334]: pgmap v1134: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 128 op/s
Feb 02 15:36:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/676823877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/105325885' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/105325885' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb 02 15:36:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb 02 15:36:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3666616324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.292 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Successfully created port: ae24d426-5095-4b1a-9447-99d1205851d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.332 239549 DEBUG nova.network.neutron [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updated VIF entry in instance network info cache for port 8489a727-801c-4762-8094-7fe19ffe6dc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.333 239549 DEBUG nova.network.neutron [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating instance_info_cache with network_info: [{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.353 239549 DEBUG oslo_concurrency.lockutils [req-6c67fc71-0d98-49f1-9c9e-06f8baac8ce5 req-0e01c491-1472-4ac6-9a9b-435241c9d91a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.509 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.511 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.511 239549 INFO nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Creating image(s)
Feb 02 15:36:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 96 op/s
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.532 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.553 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.574 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.577 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:29 compute-0 ceph-mon[75334]: osdmap e252: 3 total, 3 up, 3 in
Feb 02 15:36:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3666616324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.624 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.625 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.625 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.626 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.647 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.650 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 304cd645-9c75-48a4-bef2-e52534374d5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.918 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 304cd645-9c75-48a4-bef2-e52534374d5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:29 compute-0 nova_compute[239545]: 2026-02-02 15:36:29.973 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] resizing rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.037 239549 DEBUG nova.objects.instance [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lazy-loading 'migration_context' on Instance uuid 304cd645-9c75-48a4-bef2-e52534374d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.057 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.057 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Ensure instance console log exists: /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.057 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.058 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.058 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.095 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Successfully updated port: ae24d426-5095-4b1a-9447-99d1205851d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.126 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.127 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquired lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.127 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.177 239549 DEBUG nova.compute.manager [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-changed-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.177 239549 DEBUG nova.compute.manager [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Refreshing instance network info cache due to event network-changed-ae24d426-5095-4b1a-9447-99d1205851d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.178 239549 DEBUG oslo_concurrency.lockutils [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.254 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:36:30 compute-0 podman[253407]: 2026-02-02 15:36:30.302516148 +0000 UTC m=+0.042742431 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 02 15:36:30 compute-0 podman[253406]: 2026-02-02 15:36:30.328916944 +0000 UTC m=+0.068681456 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:36:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb 02 15:36:30 compute-0 ceph-mon[75334]: pgmap v1136: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 96 op/s
Feb 02 15:36:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb 02 15:36:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb 02 15:36:30 compute-0 nova_compute[239545]: 2026-02-02 15:36:30.995 239549 DEBUG nova.network.neutron [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updating instance_info_cache with network_info: [{"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.018 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Releasing lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.019 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Instance network_info: |[{"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.019 239549 DEBUG oslo_concurrency.lockutils [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.020 239549 DEBUG nova.network.neutron [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Refreshing network info cache for port ae24d426-5095-4b1a-9447-99d1205851d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.023 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Start _get_guest_xml network_info=[{"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': 'c9d102a3-f231-45cb-8053-7e397ed07496', 'mount_device': '/dev/vdb', 'boot_index': -1, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a7ec739a-d3b1-49b0-a843-632e26b65015', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a7ec739a-d3b1-49b0-a843-632e26b65015', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '304cd645-9c75-48a4-bef2-e52534374d5e', 'attached_at': '', 'detached_at': '', 'volume_id': 'a7ec739a-d3b1-49b0-a843-632e26b65015', 'serial': 'a7ec739a-d3b1-49b0-a843-632e26b65015'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.027 239549 WARNING nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.032 239549 DEBUG nova.virt.libvirt.host [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.033 239549 DEBUG nova.virt.libvirt.host [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.039 239549 DEBUG nova.virt.libvirt.host [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.040 239549 DEBUG nova.virt.libvirt.host [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.041 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.041 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.042 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.042 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.042 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.042 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.043 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.043 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.043 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.044 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.044 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.044 239549 DEBUG nova.virt.hardware [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.047 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Feb 02 15:36:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714171415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.614 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb 02 15:36:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb 02 15:36:31 compute-0 ceph-mon[75334]: osdmap e253: 3 total, 3 up, 3 in
Feb 02 15:36:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1714171415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.638 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:31 compute-0 nova_compute[239545]: 2026-02-02 15:36:31.643 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb 02 15:36:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310243793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.181 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.229 239549 DEBUG nova.network.neutron [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updated VIF entry in instance network info cache for port ae24d426-5095-4b1a-9447-99d1205851d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.230 239549 DEBUG nova.network.neutron [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updating instance_info_cache with network_info: [{"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.258 239549 DEBUG oslo_concurrency.lockutils [req-a3893927-765d-425c-bfc4-83ba4cdf501d req-c16b441e-479f-4f5b-874d-3df6718b248f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.274 239549 DEBUG nova.virt.libvirt.vif [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1212069114',display_name='tempest-instance-1212069114',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1212069114',id=9,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMqG+1E3NIr10c8CJSJ+GP1kqg+GuUbWR7tkG9T6caPQyltbKlM5hixdyE6JKDdeZ9QJ3HyYVSNI6wBjrKCMNKYeUVJdASpMrALkEdfg0h3qhbDwSVGfPCNcdhpEwtygSw==',key_name='tempest-keypair-450571809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38955a398ac84e6292ec72dd46d5a973',ramdisk_id='',reservation_id='r-06c5ukij',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-622371638',owner_user_name='tempest-VolumesBackupsTest-622371638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:36:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1869bacd75349e1b296189b33fb5426',uuid=304cd645-9c75-48a4-bef2-e52534374d5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.275 239549 DEBUG nova.network.os_vif_util [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converting VIF {"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.276 239549 DEBUG nova.network.os_vif_util [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.277 239549 DEBUG nova.objects.instance [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lazy-loading 'pci_devices' on Instance uuid 304cd645-9c75-48a4-bef2-e52534374d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.308 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <uuid>304cd645-9c75-48a4-bef2-e52534374d5e</uuid>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <name>instance-00000009</name>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:name>tempest-instance-1212069114</nova:name>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:36:31</nova:creationTime>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:user uuid="f1869bacd75349e1b296189b33fb5426">tempest-VolumesBackupsTest-622371638-project-member</nova:user>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:project uuid="38955a398ac84e6292ec72dd46d5a973">tempest-VolumesBackupsTest-622371638</nova:project>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <nova:port uuid="ae24d426-5095-4b1a-9447-99d1205851d0">
Feb 02 15:36:32 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <system>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="serial">304cd645-9c75-48a4-bef2-e52534374d5e</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="uuid">304cd645-9c75-48a4-bef2-e52534374d5e</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </system>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <os>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </os>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <features>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </features>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/304cd645-9c75-48a4-bef2-e52534374d5e_disk">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </source>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/304cd645-9c75-48a4-bef2-e52534374d5e_disk.config">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </source>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-a7ec739a-d3b1-49b0-a843-632e26b65015">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </source>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:36:32 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <target dev="vdb" bus="virtio"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <serial>a7ec739a-d3b1-49b0-a843-632e26b65015</serial>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:5b:6c:e3"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <target dev="tapae24d426-50"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/console.log" append="off"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <video>
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </video>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:36:32 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:36:32 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:36:32 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:36:32 compute-0 nova_compute[239545]: </domain>
Feb 02 15:36:32 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.313 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Preparing to wait for external event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.314 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.315 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.315 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.316 239549 DEBUG nova.virt.libvirt.vif [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1212069114',display_name='tempest-instance-1212069114',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1212069114',id=9,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMqG+1E3NIr10c8CJSJ+GP1kqg+GuUbWR7tkG9T6caPQyltbKlM5hixdyE6JKDdeZ9QJ3HyYVSNI6wBjrKCMNKYeUVJdASpMrALkEdfg0h3qhbDwSVGfPCNcdhpEwtygSw==',key_name='tempest-keypair-450571809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38955a398ac84e6292ec72dd46d5a973',ramdisk_id='',reservation_id='r-06c5ukij',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-622371638',owner_user_name='tempest-VolumesBackupsTest-622371638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:36:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1869bacd75349e1b296189b33fb5426',uuid=304cd645-9c75-48a4-bef2-e52534374d5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.317 239549 DEBUG nova.network.os_vif_util [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converting VIF {"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.317 239549 DEBUG nova.network.os_vif_util [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.318 239549 DEBUG os_vif [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.318 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.319 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.319 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.322 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.322 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae24d426-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.322 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae24d426-50, col_values=(('external_ids', {'iface-id': 'ae24d426-5095-4b1a-9447-99d1205851d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:6c:e3', 'vm-uuid': '304cd645-9c75-48a4-bef2-e52534374d5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.363 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:32 compute-0 NetworkManager[49171]: <info>  [1770046592.3668] manager: (tapae24d426-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.368 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.371 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.373 239549 INFO os_vif [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50')
Feb 02 15:36:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.454 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.455 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.455 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.456 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] No VIF found with MAC fa:16:3e:5b:6c:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.457 239549 INFO nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Using config drive
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.484 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb 02 15:36:32 compute-0 ceph-mon[75334]: pgmap v1138: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Feb 02 15:36:32 compute-0 ceph-mon[75334]: osdmap e254: 3 total, 3 up, 3 in
Feb 02 15:36:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3310243793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb 02 15:36:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.917 239549 INFO nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Creating config drive at /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config
Feb 02 15:36:32 compute-0 nova_compute[239545]: 2026-02-02 15:36:32.923 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe2e7xe07 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.046 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe2e7xe07" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.074 239549 DEBUG nova.storage.rbd_utils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] rbd image 304cd645-9c75-48a4-bef2-e52534374d5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.079 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config 304cd645-9c75-48a4-bef2-e52534374d5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.379 239549 DEBUG oslo_concurrency.processutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config 304cd645-9c75-48a4-bef2-e52534374d5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.380 239549 INFO nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Deleting local config drive /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e/disk.config because it was imported into RBD.
Feb 02 15:36:33 compute-0 kernel: tapae24d426-50: entered promiscuous mode
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.4176] manager: (tapae24d426-50): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Feb 02 15:36:33 compute-0 systemd-udevd[253585]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.474 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:33 compute-0 ovn_controller[144995]: 2026-02-02T15:36:33Z|00092|binding|INFO|Claiming lport ae24d426-5095-4b1a-9447-99d1205851d0 for this chassis.
Feb 02 15:36:33 compute-0 ovn_controller[144995]: 2026-02-02T15:36:33Z|00093|binding|INFO|ae24d426-5095-4b1a-9447-99d1205851d0: Claiming fa:16:3e:5b:6c:e3 10.100.0.6
Feb 02 15:36:33 compute-0 ovn_controller[144995]: 2026-02-02T15:36:33Z|00094|binding|INFO|Setting lport ae24d426-5095-4b1a-9447-99d1205851d0 ovn-installed in OVS
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.4868] device (tapae24d426-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.4880] device (tapae24d426-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.487 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:6c:e3 10.100.0.6'], port_security=['fa:16:3e:5b:6c:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '304cd645-9c75-48a4-bef2-e52534374d5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38955a398ac84e6292ec72dd46d5a973', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa658106-2847-4d06-87ee-90b34f78ae7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bddabc3e-3d66-4dfa-bd39-6fb99a743486, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ae24d426-5095-4b1a-9447-99d1205851d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.489 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ae24d426-5095-4b1a-9447-99d1205851d0 in datapath c3ceba88-6072-4e8b-849a-7f0feefeaf73 bound to our chassis
Feb 02 15:36:33 compute-0 ovn_controller[144995]: 2026-02-02T15:36:33Z|00095|binding|INFO|Setting lport ae24d426-5095-4b1a-9447-99d1205851d0 up in Southbound
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.490 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3ceba88-6072-4e8b-849a-7f0feefeaf73
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.488 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.503 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[627d3d13-fc7a-4fbf-b393-b8e19dcf2f43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.504 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc3ceba88-61 in ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.506 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc3ceba88-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.506 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e6965b-845c-423e-9ad2-f801f8f92677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.507 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6e19b4f3-1e76-403d-af0f-2ee82ac4df0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 systemd-machined[207609]: New machine qemu-9-instance-00000009.
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.518 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[8d67e76f-546a-4a9f-8169-5812fa0caa11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Feb 02 15:36:33 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.531 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[24904a71-38c4-4c63-b6e3-d2d676660ff3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.560 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[0827d92c-2cf3-49a6-8f48-ab25eefb76a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 systemd-udevd[253589]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.5673] manager: (tapc3ceba88-60): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.566 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[83953948-9ba7-4299-9ff7-6b1f1b2f2bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.595 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9a2f85-dd79-4d4c-9884-8e060cc5fc8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.599 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcb3161-c188-4976-bd53-a894a1867178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.6174] device (tapc3ceba88-60): carrier: link connected
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.620 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[31c57841-dd5e-41ce-b195-4a6104cd9ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.632 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[81796715-09fe-41cc-b7d8-cbac1d12f33b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3ceba88-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:2a:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407953, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253621, 'error': None, 'target': 'ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.647 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[07a147af-3881-4cee-851d-4dcda7b67e14]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:2a04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407953, 'tstamp': 407953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253622, 'error': None, 'target': 'ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.652 239549 DEBUG nova.compute.manager [req-0e3a078f-19b1-4822-aa4a-97db0aecfdc3 req-29674495-5206-43a6-9eef-19325ea0f75e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.653 239549 DEBUG oslo_concurrency.lockutils [req-0e3a078f-19b1-4822-aa4a-97db0aecfdc3 req-29674495-5206-43a6-9eef-19325ea0f75e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.653 239549 DEBUG oslo_concurrency.lockutils [req-0e3a078f-19b1-4822-aa4a-97db0aecfdc3 req-29674495-5206-43a6-9eef-19325ea0f75e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.653 239549 DEBUG oslo_concurrency.lockutils [req-0e3a078f-19b1-4822-aa4a-97db0aecfdc3 req-29674495-5206-43a6-9eef-19325ea0f75e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.653 239549 DEBUG nova.compute.manager [req-0e3a078f-19b1-4822-aa4a-97db0aecfdc3 req-29674495-5206-43a6-9eef-19325ea0f75e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Processing event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.659 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1e105f-82ae-4acb-ad9f-df44efa73d00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3ceba88-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:2a:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407953, 'reachable_time': 21184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253623, 'error': None, 'target': 'ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.685 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e11d93a8-6739-4158-9281-58a5c8c5f84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb 02 15:36:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb 02 15:36:33 compute-0 ceph-mon[75334]: osdmap e255: 3 total, 3 up, 3 in
Feb 02 15:36:33 compute-0 ceph-mon[75334]: pgmap v1141: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.744 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[552daeff-ab15-4f0d-8b69-e200f2dfc7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.745 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3ceba88-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.745 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.746 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3ceba88-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:33 compute-0 kernel: tapc3ceba88-60: entered promiscuous mode
Feb 02 15:36:33 compute-0 NetworkManager[49171]: <info>  [1770046593.7487] manager: (tapc3ceba88-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.749 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.751 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3ceba88-60, col_values=(('external_ids', {'iface-id': '3156bb6d-ffcf-4cf9-b8f0-2e49b08f8b4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:33 compute-0 ovn_controller[144995]: 2026-02-02T15:36:33Z|00096|binding|INFO|Releasing lport 3156bb6d-ffcf-4cf9-b8f0-2e49b08f8b4d from this chassis (sb_readonly=0)
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.753 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c3ceba88-6072-4e8b-849a-7f0feefeaf73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c3ceba88-6072-4e8b-849a-7f0feefeaf73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.761 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e229d3f5-59a2-459a-9659-6ee7409f6a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.762 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-c3ceba88-6072-4e8b-849a-7f0feefeaf73
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/c3ceba88-6072-4e8b-849a-7f0feefeaf73.pid.haproxy
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID c3ceba88-6072-4e8b-849a-7f0feefeaf73
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:36:33 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:33.763 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'env', 'PROCESS_TAG=haproxy-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c3ceba88-6072-4e8b-849a-7f0feefeaf73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.761 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.934 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046593.9340432, 304cd645-9c75-48a4-bef2-e52534374d5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.934 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] VM Started (Lifecycle Event)
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.937 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.941 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.945 239549 INFO nova.virt.libvirt.driver [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Instance spawned successfully.
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.945 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.961 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.969 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.975 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.976 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.977 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.977 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.978 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.979 239549 DEBUG nova.virt.libvirt.driver [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.995 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.996 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046593.9372616, 304cd645-9c75-48a4-bef2-e52534374d5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:33 compute-0 nova_compute[239545]: 2026-02-02 15:36:33.996 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] VM Paused (Lifecycle Event)
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.050 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.054 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046593.9400766, 304cd645-9c75-48a4-bef2-e52534374d5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.054 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] VM Resumed (Lifecycle Event)
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.077 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.082 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.087 239549 INFO nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Took 4.58 seconds to spawn the instance on the hypervisor.
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.087 239549 DEBUG nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.158 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:36:34 compute-0 podman[253713]: 2026-02-02 15:36:34.114569163 +0000 UTC m=+0.024805129 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:36:34 compute-0 podman[253713]: 2026-02-02 15:36:34.28585844 +0000 UTC m=+0.196094376 container create 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.292 239549 INFO nova.compute.manager [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Took 6.87 seconds to build instance.
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.396 239549 DEBUG oslo_concurrency.lockutils [None req-4faae473-ffe8-40d1-bd51-0a98a99f04ea f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:34 compute-0 systemd[1]: Started libpod-conmon-0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301.scope.
Feb 02 15:36:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e08a0ca02de1c98d299614360898cd0e0d0fc079634ea794131d9edf5364d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:36:34 compute-0 podman[253713]: 2026-02-02 15:36:34.459129395 +0000 UTC m=+0.369365351 container init 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:36:34 compute-0 podman[253713]: 2026-02-02 15:36:34.465536349 +0000 UTC m=+0.375772285 container start 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:36:34 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [NOTICE]   (253732) : New worker (253734) forked
Feb 02 15:36:34 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [NOTICE]   (253732) : Loading success.
Feb 02 15:36:34 compute-0 ceph-mon[75334]: osdmap e256: 3 total, 3 up, 3 in
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.956 239549 DEBUG oslo_concurrency.lockutils [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.957 239549 DEBUG oslo_concurrency.lockutils [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:34 compute-0 nova_compute[239545]: 2026-02-02 15:36:34.972 239549 INFO nova.compute.manager [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Detaching volume e51a8a52-a8d6-4d5e-9f64-251b0ad7991c
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.112 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.115 239549 INFO nova.virt.block_device [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Attempting to driver detach volume e51a8a52-a8d6-4d5e-9f64-251b0ad7991c from mountpoint /dev/vdb
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.124 239549 DEBUG nova.virt.libvirt.driver [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Attempting to detach device vdb from instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.125 239549 DEBUG nova.virt.libvirt.guest [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-e51a8a52-a8d6-4d5e-9f64-251b0ad7991c">
Feb 02 15:36:35 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   </source>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <serial>e51a8a52-a8d6-4d5e-9f64-251b0ad7991c</serial>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]: </disk>
Feb 02 15:36:35 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.136 239549 INFO nova.virt.libvirt.driver [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully detached device vdb from instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 from the persistent domain config.
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.137 239549 DEBUG nova.virt.libvirt.driver [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.138 239549 DEBUG nova.virt.libvirt.guest [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-e51a8a52-a8d6-4d5e-9f64-251b0ad7991c">
Feb 02 15:36:35 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   </source>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <serial>e51a8a52-a8d6-4d5e-9f64-251b0ad7991c</serial>
Feb 02 15:36:35 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:36:35 compute-0 nova_compute[239545]: </disk>
Feb 02 15:36:35 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.193 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046595.1934047, a39fdefd-dea8-4cde-af15-a9b32e21ec59 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.195 239549 DEBUG nova.virt.libvirt.driver [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.198 239549 INFO nova.virt.libvirt.driver [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully detached device vdb from instance a39fdefd-dea8-4cde-af15-a9b32e21ec59 from the live domain config.
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.344 239549 DEBUG nova.objects.instance [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'flavor' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.382 239549 DEBUG oslo_concurrency.lockutils [None req-e37f046c-0079-40fb-b921-a9315c93d046 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.384 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.384 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.385 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.385 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.387 239549 INFO nova.compute.manager [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Terminating instance
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.389 239549 DEBUG nova.compute.manager [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:36:35 compute-0 kernel: tap64418707-ba (unregistering): left promiscuous mode
Feb 02 15:36:35 compute-0 NetworkManager[49171]: <info>  [1770046595.4417] device (tap64418707-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:36:35 compute-0 ovn_controller[144995]: 2026-02-02T15:36:35Z|00097|binding|INFO|Releasing lport 64418707-ba84-4b70-969a-d0882e71bae7 from this chassis (sb_readonly=0)
Feb 02 15:36:35 compute-0 ovn_controller[144995]: 2026-02-02T15:36:35Z|00098|binding|INFO|Setting lport 64418707-ba84-4b70-969a-d0882e71bae7 down in Southbound
Feb 02 15:36:35 compute-0 ovn_controller[144995]: 2026-02-02T15:36:35Z|00099|binding|INFO|Removing iface tap64418707-ba ovn-installed in OVS
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.463 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.467 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:a5:a1 10.100.0.12'], port_security=['fa:16:3e:fd:a5:a1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a39fdefd-dea8-4cde-af15-a9b32e21ec59', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010150769bb34684be4a2dff720d1b35', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3cbd3bf-cbad-4116-898c-fe2794c264e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71d31c89-df7b-4a1a-b202-a6dac026a894, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=64418707-ba84-4b70-969a-d0882e71bae7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.470 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 64418707-ba84-4b70-969a-d0882e71bae7 in datapath 476af4b4-172e-44ce-8fec-4b78aa7603bb unbound from our chassis
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.472 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 476af4b4-172e-44ce-8fec-4b78aa7603bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.475 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.476 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c516495-88e0-4d5f-928b-84aa1c615de1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.477 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb namespace which is not needed anymore
Feb 02 15:36:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 656 KiB/s rd, 858 KiB/s wr, 132 op/s
Feb 02 15:36:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 02 15:36:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.877s CPU time.
Feb 02 15:36:35 compute-0 systemd-machined[207609]: Machine qemu-7-instance-00000007 terminated.
Feb 02 15:36:35 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [NOTICE]   (251994) : haproxy version is 2.8.14-c23fe91
Feb 02 15:36:35 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [NOTICE]   (251994) : path to executable is /usr/sbin/haproxy
Feb 02 15:36:35 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [WARNING]  (251994) : Exiting Master process...
Feb 02 15:36:35 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [ALERT]    (251994) : Current worker (251996) exited with code 143 (Terminated)
Feb 02 15:36:35 compute-0 neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb[251972]: [WARNING]  (251994) : All workers exited. Exiting... (0)
Feb 02 15:36:35 compute-0 systemd[1]: libpod-7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535.scope: Deactivated successfully.
Feb 02 15:36:35 compute-0 podman[253766]: 2026-02-02 15:36:35.604208257 +0000 UTC m=+0.040298702 container died 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 02 15:36:35 compute-0 NetworkManager[49171]: <info>  [1770046595.6076] manager: (tap64418707-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Feb 02 15:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535-userdata-shm.mount: Deactivated successfully.
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.629 239549 INFO nova.virt.libvirt.driver [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Instance destroyed successfully.
Feb 02 15:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e578b22d5bd12da1662a7c6662adb669079469d0a8ca3e7a73ec2d1ac885607c-merged.mount: Deactivated successfully.
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.632 239549 DEBUG nova.objects.instance [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lazy-loading 'resources' on Instance uuid a39fdefd-dea8-4cde-af15-a9b32e21ec59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:35 compute-0 podman[253766]: 2026-02-02 15:36:35.648990416 +0000 UTC m=+0.085080861 container cleanup 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.648 239549 DEBUG nova.virt.libvirt.vif [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:35:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-600180089',display_name='tempest-VolumesSnapshotTestJSON-instance-600180089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-600180089',id=7,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIiThGePfK2DrL2AItHEHOGIrdU1smZ3U40keJe8fQQtl5n612JiE/KiPwhNPhY4j3H7qa5W9L8WWgGPcgmddwbzlNN11KVdKqW6TkB0kL+C6GYSzoEU6/cvMXh+RuBnIg==',key_name='tempest-keypair-1188114825',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:35:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='010150769bb34684be4a2dff720d1b35',ramdisk_id='',reservation_id='r-d6bqhw8i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1645199079',owner_user_name='tempest-VolumesSnapshotTestJSON-1645199079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:35:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2059424184a34c2da768a2a83c23a7f5',uuid=a39fdefd-dea8-4cde-af15-a9b32e21ec59,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.649 239549 DEBUG nova.network.os_vif_util [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converting VIF {"id": "64418707-ba84-4b70-969a-d0882e71bae7", "address": "fa:16:3e:fd:a5:a1", "network": {"id": "476af4b4-172e-44ce-8fec-4b78aa7603bb", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1773590175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010150769bb34684be4a2dff720d1b35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64418707-ba", "ovs_interfaceid": "64418707-ba84-4b70-969a-d0882e71bae7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.650 239549 DEBUG nova.network.os_vif_util [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.650 239549 DEBUG os_vif [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.651 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.652 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64418707-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.653 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.654 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.656 239549 INFO os_vif [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:a5:a1,bridge_name='br-int',has_traffic_filtering=True,id=64418707-ba84-4b70-969a-d0882e71bae7,network=Network(476af4b4-172e-44ce-8fec-4b78aa7603bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64418707-ba')
Feb 02 15:36:35 compute-0 systemd[1]: libpod-conmon-7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535.scope: Deactivated successfully.
Feb 02 15:36:35 compute-0 ceph-mon[75334]: pgmap v1143: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 656 KiB/s rd, 858 KiB/s wr, 132 op/s
Feb 02 15:36:35 compute-0 podman[253805]: 2026-02-02 15:36:35.724803403 +0000 UTC m=+0.048316766 container remove 7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.730 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d0984ccd-43c7-41e0-9ca6-d5f1249c8501]: (4, ('Mon Feb  2 03:36:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb (7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535)\n7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535\nMon Feb  2 03:36:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb (7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535)\n7d728258feab9c7d787b63c455506ce8a9023b8e26fdb27c7adc0de7ba9f9535\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.732 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[16c8323f-8e8e-4ba3-9184-c8a9bd09d2d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.733 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap476af4b4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:36:35 compute-0 kernel: tap476af4b4-10: left promiscuous mode
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.735 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.746 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[134c8378-9e28-4dbb-8b3a-7c2a1075d8fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.758 239549 DEBUG nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.758 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.758 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.759 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.759 239549 DEBUG nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] No waiting events found dispatching network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.759 239549 WARNING nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received unexpected event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 for instance with vm_state active and task_state None.
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.759 239549 DEBUG nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-unplugged-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.759 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.760 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.760 239549 DEBUG oslo_concurrency.lockutils [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.760 239549 DEBUG nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] No waiting events found dispatching network-vif-unplugged-64418707-ba84-4b70-969a-d0882e71bae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.760 239549 DEBUG nova.compute.manager [req-d0f666af-c831-4600-a2bf-c7c8f5e962bd req-2d5397a8-3dd2-4b2f-b877-998a3833a426 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-unplugged-64418707-ba84-4b70-969a-d0882e71bae7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.774 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc69baf-66b3-4369-83be-36de55964d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.777 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a58e3cf8-679f-4af8-b2c4-1492b1f44401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.794 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[787ea6f7-0dd7-44fa-ad8e-ecc3a01d5ca0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403542, 'reachable_time': 31535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253836, 'error': None, 'target': 'ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d476af4b4\x2d172e\x2d44ce\x2d8fec\x2d4b78aa7603bb.mount: Deactivated successfully.
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.800 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-476af4b4-172e-44ce-8fec-4b78aa7603bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:36:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:35.800 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1e5347-90d0-4897-80eb-e062c58ac1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.949 239549 INFO nova.virt.libvirt.driver [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Deleting instance files /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59_del
Feb 02 15:36:35 compute-0 nova_compute[239545]: 2026-02-02 15:36:35.951 239549 INFO nova.virt.libvirt.driver [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Deletion of /var/lib/nova/instances/a39fdefd-dea8-4cde-af15-a9b32e21ec59_del complete
Feb 02 15:36:36 compute-0 nova_compute[239545]: 2026-02-02 15:36:36.014 239549 INFO nova.compute.manager [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 02 15:36:36 compute-0 nova_compute[239545]: 2026-02-02 15:36:36.016 239549 DEBUG oslo.service.loopingcall [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:36:36 compute-0 nova_compute[239545]: 2026-02-02 15:36:36.017 239549 DEBUG nova.compute.manager [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:36:36 compute-0 nova_compute[239545]: 2026-02-02 15:36:36.017 239549 DEBUG nova.network.neutron [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:36:36 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 02 15:36:36 compute-0 ovn_controller[144995]: 2026-02-02T15:36:36Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:29:66 10.100.0.10
Feb 02 15:36:36 compute-0 ovn_controller[144995]: 2026-02-02T15:36:36Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:29:66 10.100.0.10
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.159 239549 DEBUG nova.network.neutron [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.176 239549 INFO nova.compute.manager [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Took 1.16 seconds to deallocate network for instance.
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.259 239549 WARNING nova.volume.cinder [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Attachment 749d51ff-9964-46d2-a1c3-b370e7aa8cb9 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 749d51ff-9964-46d2-a1c3-b370e7aa8cb9. (HTTP 404) (Request-ID: req-4fe58155-bc37-4a4f-a460-9b4dab5b0dfc)
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.259 239549 INFO nova.compute.manager [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Took 0.08 seconds to detach 1 volumes for instance.
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.300 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.301 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.388 239549 DEBUG oslo_concurrency.processutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 732 KiB/s wr, 227 op/s
Feb 02 15:36:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:36:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779750700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.991 239549 DEBUG oslo_concurrency.processutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.999 239549 DEBUG nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.999 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.999 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:37 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.999 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:37.999 239549 DEBUG nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] No waiting events found dispatching network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 WARNING nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received unexpected event network-vif-plugged-64418707-ba84-4b70-969a-d0882e71bae7 for instance with vm_state deleted and task_state None.
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 DEBUG nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-changed-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 DEBUG nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Refreshing instance network info cache due to event network-changed-ae24d426-5095-4b1a-9447-99d1205851d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.000 239549 DEBUG nova.network.neutron [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Refreshing network info cache for port ae24d426-5095-4b1a-9447-99d1205851d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.004 239549 DEBUG nova.compute.provider_tree [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.021 239549 DEBUG nova.scheduler.client.report [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.055 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.088 239549 INFO nova.scheduler.client.report [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Deleted allocations for instance a39fdefd-dea8-4cde-af15-a9b32e21ec59
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.158 239549 DEBUG oslo_concurrency.lockutils [None req-b89b53bf-4f58-4489-ace3-9fa6405947b0 2059424184a34c2da768a2a83c23a7f5 010150769bb34684be4a2dff720d1b35 - - default default] Lock "a39fdefd-dea8-4cde-af15-a9b32e21ec59" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:38 compute-0 nova_compute[239545]: 2026-02-02 15:36:38.485 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:38 compute-0 ceph-mon[75334]: pgmap v1144: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 732 KiB/s wr, 227 op/s
Feb 02 15:36:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1779750700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:36:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 202 op/s
Feb 02 15:36:40 compute-0 nova_compute[239545]: 2026-02-02 15:36:40.132 239549 DEBUG nova.network.neutron [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updated VIF entry in instance network info cache for port ae24d426-5095-4b1a-9447-99d1205851d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:36:40 compute-0 nova_compute[239545]: 2026-02-02 15:36:40.132 239549 DEBUG nova.network.neutron [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updating instance_info_cache with network_info: [{"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:36:40 compute-0 nova_compute[239545]: 2026-02-02 15:36:40.156 239549 DEBUG oslo_concurrency.lockutils [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-304cd645-9c75-48a4-bef2-e52534374d5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:36:40 compute-0 nova_compute[239545]: 2026-02-02 15:36:40.157 239549 DEBUG nova.compute.manager [req-a6c56421-58c1-4f01-bd44-854649f88edd req-773b805d-9a30-4257-87e1-ec9c457f823f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Received event network-vif-deleted-64418707-ba84-4b70-969a-d0882e71bae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:36:40 compute-0 ceph-mon[75334]: pgmap v1145: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 202 op/s
Feb 02 15:36:40 compute-0 nova_compute[239545]: 2026-02-02 15:36:40.654 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 297 op/s
Feb 02 15:36:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb 02 15:36:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb 02 15:36:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb 02 15:36:42 compute-0 ceph-mon[75334]: pgmap v1146: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 297 op/s
Feb 02 15:36:42 compute-0 ceph-mon[75334]: osdmap e257: 3 total, 3 up, 3 in
Feb 02 15:36:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:36:42
Feb 02 15:36:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:36:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:36:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'vms', 'volumes']
Feb 02 15:36:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.487 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.493 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.494 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.510 239549 DEBUG nova.objects.instance [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.593 239549 INFO nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Ignoring supplied device name: /dev/vdb
Feb 02 15:36:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb 02 15:36:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb 02 15:36:43 compute-0 nova_compute[239545]: 2026-02-02 15:36:43.649 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.095 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.096 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.096 239549 INFO nova.compute.manager [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Attaching volume 9d8b0104-e8e0-41d9-8b53-4c657499398c to /dev/vdb
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.200 239549 DEBUG os_brick.utils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.202 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.211 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.211 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ddde5de5-00eb-466d-b576-540df58bc126]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.213 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.219 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.219 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[306f7942-ddaa-42a8-96f2-4405bc518674]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.221 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.227 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.227 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[3a421ec2-ddd7-4b74-8ccc-f7eba724f873]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.228 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2b4be1-a700-462a-9744-065ec5da12c8]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.229 239549 DEBUG oslo_concurrency.processutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.245 239549 DEBUG oslo_concurrency.processutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.247 239549 DEBUG os_brick.initiator.connectors.lightos [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.247 239549 DEBUG os_brick.initiator.connectors.lightos [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.248 239549 DEBUG os_brick.initiator.connectors.lightos [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.248 239549 DEBUG os_brick.utils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:36:44 compute-0 nova_compute[239545]: 2026-02-02 15:36:44.249 239549 DEBUG nova.virt.block_device [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating existing volume attachment record: 97c34f6b-fddc-4fab-9d75-b99bc7584afb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:36:44 compute-0 ceph-mon[75334]: pgmap v1148: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Feb 02 15:36:44 compute-0 ceph-mon[75334]: osdmap e258: 3 total, 3 up, 3 in
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:36:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:36:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/87087485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:45 compute-0 nova_compute[239545]: 2026-02-02 15:36:45.501 239549 DEBUG nova.objects.instance [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:36:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 947 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Feb 02 15:36:45 compute-0 nova_compute[239545]: 2026-02-02 15:36:45.657 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:45 compute-0 nova_compute[239545]: 2026-02-02 15:36:45.686 239549 DEBUG nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Attempting to attach volume 9d8b0104-e8e0-41d9-8b53-4c657499398c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:36:45 compute-0 nova_compute[239545]: 2026-02-02 15:36:45.688 239549 DEBUG nova.virt.libvirt.guest [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:36:45 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9d8b0104-e8e0-41d9-8b53-4c657499398c">
Feb 02 15:36:45 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   </source>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:36:45 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:36:45 compute-0 nova_compute[239545]:   <serial>9d8b0104-e8e0-41d9-8b53-4c657499398c</serial>
Feb 02 15:36:45 compute-0 nova_compute[239545]: </disk>
Feb 02 15:36:45 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:36:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/87087485' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:46 compute-0 ceph-mon[75334]: pgmap v1150: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 947 KiB/s rd, 3.2 MiB/s wr, 161 op/s
Feb 02 15:36:46 compute-0 nova_compute[239545]: 2026-02-02 15:36:46.333 239549 DEBUG nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:46 compute-0 nova_compute[239545]: 2026-02-02 15:36:46.334 239549 DEBUG nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:46 compute-0 nova_compute[239545]: 2026-02-02 15:36:46.334 239549 DEBUG nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:36:46 compute-0 nova_compute[239545]: 2026-02-02 15:36:46.335 239549 DEBUG nova.virt.libvirt.driver [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] No VIF found with MAC fa:16:3e:a8:29:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:36:46 compute-0 nova_compute[239545]: 2026-02-02 15:36:46.742 239549 DEBUG oslo_concurrency.lockutils [None req-0bb5f3a0-e962-4d70-8ce4-7dc77b200c5c b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 539 KiB/s rd, 2.4 MiB/s wr, 149 op/s
Feb 02 15:36:47 compute-0 ovn_controller[144995]: 2026-02-02T15:36:47Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:6c:e3 10.100.0.6
Feb 02 15:36:47 compute-0 ovn_controller[144995]: 2026-02-02T15:36:47Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:6c:e3 10.100.0.6
Feb 02 15:36:48 compute-0 nova_compute[239545]: 2026-02-02 15:36:48.528 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:48 compute-0 ceph-mon[75334]: pgmap v1151: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 539 KiB/s rd, 2.4 MiB/s wr, 149 op/s
Feb 02 15:36:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261857454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 116 KiB/s rd, 635 KiB/s wr, 53 op/s
Feb 02 15:36:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273363652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273363652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb 02 15:36:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb 02 15:36:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3261857454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3273363652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3273363652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb 02 15:36:50 compute-0 nova_compute[239545]: 2026-02-02 15:36:50.625 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046595.6244094, a39fdefd-dea8-4cde-af15-a9b32e21ec59 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:36:50 compute-0 nova_compute[239545]: 2026-02-02 15:36:50.626 239549 INFO nova.compute.manager [-] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] VM Stopped (Lifecycle Event)
Feb 02 15:36:50 compute-0 nova_compute[239545]: 2026-02-02 15:36:50.648 239549 DEBUG nova.compute.manager [None req-2d6fc99a-d613-4acf-830f-544de9fb3a9b - - - - - -] [instance: a39fdefd-dea8-4cde-af15-a9b32e21ec59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:36:50 compute-0 nova_compute[239545]: 2026-02-02 15:36:50.697 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb 02 15:36:50 compute-0 ceph-mon[75334]: pgmap v1152: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 116 KiB/s rd, 635 KiB/s wr, 53 op/s
Feb 02 15:36:50 compute-0 ceph-mon[75334]: osdmap e259: 3 total, 3 up, 3 in
Feb 02 15:36:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb 02 15:36:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb 02 15:36:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 477 KiB/s rd, 3.3 MiB/s wr, 208 op/s
Feb 02 15:36:51 compute-0 ceph-mon[75334]: osdmap e260: 3 total, 3 up, 3 in
Feb 02 15:36:51 compute-0 ceph-mon[75334]: pgmap v1155: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 477 KiB/s rd, 3.3 MiB/s wr, 208 op/s
Feb 02 15:36:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4029831176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb 02 15:36:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4029831176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb 02 15:36:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb 02 15:36:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/746155609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/746155609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 690 KiB/s rd, 4.3 MiB/s wr, 270 op/s
Feb 02 15:36:53 compute-0 nova_compute[239545]: 2026-02-02 15:36:53.531 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb 02 15:36:53 compute-0 ceph-mon[75334]: osdmap e261: 3 total, 3 up, 3 in
Feb 02 15:36:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/746155609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/746155609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:53 compute-0 ceph-mon[75334]: pgmap v1157: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 690 KiB/s rd, 4.3 MiB/s wr, 270 op/s
Feb 02 15:36:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb 02 15:36:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011093159737884746 of space, bias 1.0, pg target 0.3327947921365424 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.034101647309759536 of space, bias 1.0, pg target 10.23049419292786 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034693110739803646 of space, bias 1.0, pg target 0.10061002114543058 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665991724084704 of space, bias 1.0, pg target 0.19313759998456415 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5940060410298057e-06 of space, bias 4.0, pg target 0.0018490470075945746 quantized to 16 (current 16)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:36:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Feb 02 15:36:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb 02 15:36:55 compute-0 ceph-mon[75334]: osdmap e262: 3 total, 3 up, 3 in
Feb 02 15:36:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb 02 15:36:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb 02 15:36:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 181 KiB/s rd, 51 KiB/s wr, 84 op/s
Feb 02 15:36:55 compute-0 nova_compute[239545]: 2026-02-02 15:36:55.700 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:56 compute-0 ceph-mon[75334]: osdmap e263: 3 total, 3 up, 3 in
Feb 02 15:36:56 compute-0 ceph-mon[75334]: pgmap v1160: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 181 KiB/s rd, 51 KiB/s wr, 84 op/s
Feb 02 15:36:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb 02 15:36:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb 02 15:36:56 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb 02 15:36:57 compute-0 ceph-mon[75334]: osdmap e264: 3 total, 3 up, 3 in
Feb 02 15:36:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:36:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3534114132' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:36:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 243 KiB/s rd, 57 KiB/s wr, 157 op/s
Feb 02 15:36:57 compute-0 nova_compute[239545]: 2026-02-02 15:36:57.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:36:58 compute-0 sudo[253888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:36:58 compute-0 sudo[253888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:58 compute-0 sudo[253888]: pam_unix(sudo:session): session closed for user root
Feb 02 15:36:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb 02 15:36:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3534114132' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:36:58 compute-0 ceph-mon[75334]: pgmap v1162: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 243 KiB/s rd, 57 KiB/s wr, 157 op/s
Feb 02 15:36:58 compute-0 sudo[253913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:36:58 compute-0 sudo[253913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb 02 15:36:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb 02 15:36:58 compute-0 nova_compute[239545]: 2026-02-02 15:36:58.534 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:36:58 compute-0 sudo[253913]: pam_unix(sudo:session): session closed for user root
Feb 02 15:36:58 compute-0 sudo[253968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:36:58 compute-0 sudo[253968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:58 compute-0 sudo[253968]: pam_unix(sudo:session): session closed for user root
Feb 02 15:36:58 compute-0 sudo[253993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 02 15:36:58 compute-0 sudo[253993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283225548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283225548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:59.249 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:36:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:59.249 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:36:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:36:59.250 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:36:59 compute-0 sudo[253993]: pam_unix(sudo:session): session closed for user root
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:36:59 compute-0 sudo[254038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:36:59 compute-0 sudo[254038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:59 compute-0 sudo[254038]: pam_unix(sudo:session): session closed for user root
Feb 02 15:36:59 compute-0 sudo[254063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:36:59 compute-0 sudo[254063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb 02 15:36:59 compute-0 ceph-mon[75334]: osdmap e265: 3 total, 3 up, 3 in
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4283225548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4283225548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:36:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb 02 15:36:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb 02 15:36:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 95 KiB/s rd, 39 KiB/s wr, 130 op/s
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.637786938 +0000 UTC m=+0.042218618 container create 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:36:59 compute-0 systemd[1]: Started libpod-conmon-369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b.scope.
Feb 02 15:36:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.616963036 +0000 UTC m=+0.021394746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.726049615 +0000 UTC m=+0.130481355 container init 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.732650804 +0000 UTC m=+0.137082484 container start 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:36:59 compute-0 crazy_curran[254118]: 167 167
Feb 02 15:36:59 compute-0 systemd[1]: libpod-369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b.scope: Deactivated successfully.
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.738975616 +0000 UTC m=+0.143407316 container attach 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.739361845 +0000 UTC m=+0.143793525 container died 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3471485d33764b2559ca13032ca41b2282ea377d3c1cfc066615500bdeed542c-merged.mount: Deactivated successfully.
Feb 02 15:36:59 compute-0 podman[254101]: 2026-02-02 15:36:59.788412027 +0000 UTC m=+0.192843697 container remove 369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_curran, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:36:59 compute-0 systemd[1]: libpod-conmon-369c3e44795d21524e695dbee546bd47d723fc3b458185211e24bf169cdb019b.scope: Deactivated successfully.
Feb 02 15:36:59 compute-0 podman[254140]: 2026-02-02 15:36:59.941445555 +0000 UTC m=+0.041224025 container create 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:36:59 compute-0 systemd[1]: Started libpod-conmon-2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff.scope.
Feb 02 15:37:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:36:59.925039379 +0000 UTC m=+0.024817879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:37:00.035161883 +0000 UTC m=+0.134940373 container init 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:37:00.043307729 +0000 UTC m=+0.143086199 container start 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:37:00.047233883 +0000 UTC m=+0.147012373 container attach 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:37:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb 02 15:37:00 compute-0 ceph-mon[75334]: osdmap e266: 3 total, 3 up, 3 in
Feb 02 15:37:00 compute-0 ceph-mon[75334]: pgmap v1165: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 95 KiB/s rd, 39 KiB/s wr, 130 op/s
Feb 02 15:37:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb 02 15:37:00 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb 02 15:37:00 compute-0 goofy_pasteur[254156]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:37:00 compute-0 goofy_pasteur[254156]: --> All data devices are unavailable
Feb 02 15:37:00 compute-0 systemd[1]: libpod-2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff.scope: Deactivated successfully.
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:37:00.508308354 +0000 UTC m=+0.608086814 container died 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfc33da16c65ca6d93dbbc6704aef635c9317bd0a0109418c43b197ebcd4a46d-merged.mount: Deactivated successfully.
Feb 02 15:37:00 compute-0 podman[254140]: 2026-02-02 15:37:00.546977295 +0000 UTC m=+0.646755765 container remove 2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:37:00 compute-0 nova_compute[239545]: 2026-02-02 15:37:00.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:00 compute-0 nova_compute[239545]: 2026-02-02 15:37:00.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:37:00 compute-0 nova_compute[239545]: 2026-02-02 15:37:00.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:37:00 compute-0 systemd[1]: libpod-conmon-2f343648862493a0fed4cda8d180b3595919576aca046f048be38f3674db1dff.scope: Deactivated successfully.
Feb 02 15:37:00 compute-0 podman[254184]: 2026-02-02 15:37:00.589069619 +0000 UTC m=+0.053572441 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:37:00 compute-0 sudo[254063]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:00 compute-0 podman[254176]: 2026-02-02 15:37:00.612126335 +0000 UTC m=+0.078986384 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 02 15:37:00 compute-0 sudo[254229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:37:00 compute-0 sudo[254229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:37:00 compute-0 sudo[254229]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:00 compute-0 sudo[254256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:37:00 compute-0 sudo[254256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:37:00 compute-0 nova_compute[239545]: 2026-02-02 15:37:00.701 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:00 compute-0 podman[254293]: 2026-02-02 15:37:00.915763702 +0000 UTC m=+0.034171625 container create 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:37:00 compute-0 systemd[1]: Started libpod-conmon-8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6.scope.
Feb 02 15:37:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:00 compute-0 podman[254293]: 2026-02-02 15:37:00.991796664 +0000 UTC m=+0.110204607 container init 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:37:00 compute-0 podman[254293]: 2026-02-02 15:37:00.898865385 +0000 UTC m=+0.017273338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:37:00 compute-0 podman[254293]: 2026-02-02 15:37:00.998180327 +0000 UTC m=+0.116588240 container start 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:37:01 compute-0 podman[254293]: 2026-02-02 15:37:01.001822405 +0000 UTC m=+0.120230378 container attach 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:37:01 compute-0 romantic_montalcini[254310]: 167 167
Feb 02 15:37:01 compute-0 systemd[1]: libpod-8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6.scope: Deactivated successfully.
Feb 02 15:37:01 compute-0 podman[254293]: 2026-02-02 15:37:01.002582304 +0000 UTC m=+0.120990227 container died 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcaa577f892d342d3dbfb97aebd7f5f33d3f261ca647463b9c00b4e1f04ab6c6-merged.mount: Deactivated successfully.
Feb 02 15:37:01 compute-0 podman[254293]: 2026-02-02 15:37:01.038307444 +0000 UTC m=+0.156715367 container remove 8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_montalcini, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:37:01 compute-0 systemd[1]: libpod-conmon-8b6067296c6b433c80460992137626bddcc5d57ed4d782b9bcfb1f7893459da6.scope: Deactivated successfully.
Feb 02 15:37:01 compute-0 nova_compute[239545]: 2026-02-02 15:37:01.058 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:37:01 compute-0 nova_compute[239545]: 2026-02-02 15:37:01.059 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:37:01 compute-0 nova_compute[239545]: 2026-02-02 15:37:01.059 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:37:01 compute-0 nova_compute[239545]: 2026-02-02 15:37:01.059 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.166101233 +0000 UTC m=+0.032235047 container create b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:37:01 compute-0 systemd[1]: Started libpod-conmon-b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1.scope.
Feb 02 15:37:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5045326747fa8ba7358ae763e7318d47ac30ae76b1bd05bc8108e5ff2b997a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5045326747fa8ba7358ae763e7318d47ac30ae76b1bd05bc8108e5ff2b997a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5045326747fa8ba7358ae763e7318d47ac30ae76b1bd05bc8108e5ff2b997a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5045326747fa8ba7358ae763e7318d47ac30ae76b1bd05bc8108e5ff2b997a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.245518067 +0000 UTC m=+0.111651901 container init b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.152414634 +0000 UTC m=+0.018548468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.25475695 +0000 UTC m=+0.120890754 container start b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.25765244 +0000 UTC m=+0.123786274 container attach b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:37:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb 02 15:37:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb 02 15:37:01 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb 02 15:37:01 compute-0 ceph-mon[75334]: osdmap e267: 3 total, 3 up, 3 in
Feb 02 15:37:01 compute-0 busy_darwin[254351]: {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     "0": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "devices": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "/dev/loop3"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             ],
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_name": "ceph_lv0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_size": "21470642176",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "name": "ceph_lv0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "tags": {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_name": "ceph",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.crush_device_class": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.encrypted": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.objectstore": "bluestore",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_id": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.vdo": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.with_tpm": "0"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             },
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "vg_name": "ceph_vg0"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         }
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     ],
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     "1": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "devices": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "/dev/loop4"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             ],
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_name": "ceph_lv1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_size": "21470642176",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "name": "ceph_lv1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "tags": {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_name": "ceph",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.crush_device_class": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.encrypted": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.objectstore": "bluestore",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_id": "1",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.vdo": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.with_tpm": "0"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             },
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "vg_name": "ceph_vg1"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         }
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     ],
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     "2": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "devices": [
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "/dev/loop5"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             ],
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_name": "ceph_lv2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_size": "21470642176",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "name": "ceph_lv2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "tags": {
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.cluster_name": "ceph",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.crush_device_class": "",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.encrypted": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.objectstore": "bluestore",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osd_id": "2",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.vdo": "0",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:                 "ceph.with_tpm": "0"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             },
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "type": "block",
Feb 02 15:37:01 compute-0 busy_darwin[254351]:             "vg_name": "ceph_vg2"
Feb 02 15:37:01 compute-0 busy_darwin[254351]:         }
Feb 02 15:37:01 compute-0 busy_darwin[254351]:     ]
Feb 02 15:37:01 compute-0 busy_darwin[254351]: }
Feb 02 15:37:01 compute-0 systemd[1]: libpod-b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1.scope: Deactivated successfully.
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.520170735 +0000 UTC m=+0.386304539 container died b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:37:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 157 KiB/s rd, 12 KiB/s wr, 210 op/s
Feb 02 15:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5045326747fa8ba7358ae763e7318d47ac30ae76b1bd05bc8108e5ff2b997a96-merged.mount: Deactivated successfully.
Feb 02 15:37:01 compute-0 podman[254335]: 2026-02-02 15:37:01.563598932 +0000 UTC m=+0.429732746 container remove b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:37:01 compute-0 systemd[1]: libpod-conmon-b728e24d24709fe34957b29f8ee49db433a754bb754651b2c698a98401a32ac1.scope: Deactivated successfully.
Feb 02 15:37:01 compute-0 sudo[254256]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:01 compute-0 sudo[254372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:37:01 compute-0 sudo[254372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:37:01 compute-0 sudo[254372]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:01 compute-0 sudo[254397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:37:01 compute-0 sudo[254397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:37:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856521192' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856521192' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:01 compute-0 podman[254434]: 2026-02-02 15:37:01.946606911 +0000 UTC m=+0.038923509 container create f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:37:01 compute-0 systemd[1]: Started libpod-conmon-f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0.scope.
Feb 02 15:37:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:01.926516616 +0000 UTC m=+0.018833234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:02.053899486 +0000 UTC m=+0.146216114 container init f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:02.060597337 +0000 UTC m=+0.152913945 container start f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:37:02 compute-0 pedantic_snyder[254450]: 167 167
Feb 02 15:37:02 compute-0 systemd[1]: libpod-f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0.scope: Deactivated successfully.
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:02.068235152 +0000 UTC m=+0.160551780 container attach f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:02.068559389 +0000 UTC m=+0.160875997 container died f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.072 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating instance_info_cache with network_info: [{"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.091 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-a19161ab-082d-4489-93df-8008cdef83ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.092 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-40bf98e88dfa7cde39b9d5262cfa7dfc9b4b815d02cdddf4b2267fa443104f4f-merged.mount: Deactivated successfully.
Feb 02 15:37:02 compute-0 podman[254434]: 2026-02-02 15:37:02.173358185 +0000 UTC m=+0.265674783 container remove f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:37:02 compute-0 systemd[1]: libpod-conmon-f3ebe152153c60fdd1a32bb111850b60fab25afd4aa2c245123af84b13b2d0e0.scope: Deactivated successfully.
Feb 02 15:37:02 compute-0 podman[254473]: 2026-02-02 15:37:02.327766835 +0000 UTC m=+0.058128701 container create 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:37:02 compute-0 systemd[1]: Started libpod-conmon-840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d.scope.
Feb 02 15:37:02 compute-0 podman[254473]: 2026-02-02 15:37:02.292219809 +0000 UTC m=+0.022581715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:37:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05340dcb2853e42a1776014dddaf984fd062740312e8ef15cef3edc623f4f91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05340dcb2853e42a1776014dddaf984fd062740312e8ef15cef3edc623f4f91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05340dcb2853e42a1776014dddaf984fd062740312e8ef15cef3edc623f4f91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05340dcb2853e42a1776014dddaf984fd062740312e8ef15cef3edc623f4f91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:02 compute-0 podman[254473]: 2026-02-02 15:37:02.448638937 +0000 UTC m=+0.179000803 container init 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:37:02 compute-0 ceph-mon[75334]: osdmap e268: 3 total, 3 up, 3 in
Feb 02 15:37:02 compute-0 ceph-mon[75334]: pgmap v1168: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 157 KiB/s rd, 12 KiB/s wr, 210 op/s
Feb 02 15:37:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1856521192' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1856521192' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:02 compute-0 podman[254473]: 2026-02-02 15:37:02.454820837 +0000 UTC m=+0.185182703 container start 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:37:02 compute-0 podman[254473]: 2026-02-02 15:37:02.462786468 +0000 UTC m=+0.193148344 container attach 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:37:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb 02 15:37:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb 02 15:37:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.605 239549 DEBUG oslo_concurrency.lockutils [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.606 239549 DEBUG oslo_concurrency.lockutils [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.622 239549 INFO nova.compute.manager [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Detaching volume 9d8b0104-e8e0-41d9-8b53-4c657499398c
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.776 239549 INFO nova.virt.block_device [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Attempting to driver detach volume 9d8b0104-e8e0-41d9-8b53-4c657499398c from mountpoint /dev/vdb
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.784 239549 DEBUG nova.virt.libvirt.driver [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Attempting to detach device vdb from instance a19161ab-082d-4489-93df-8008cdef83ce from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.785 239549 DEBUG nova.virt.libvirt.guest [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9d8b0104-e8e0-41d9-8b53-4c657499398c">
Feb 02 15:37:02 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   </source>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <serial>9d8b0104-e8e0-41d9-8b53-4c657499398c</serial>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]: </disk>
Feb 02 15:37:02 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.793 239549 INFO nova.virt.libvirt.driver [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully detached device vdb from instance a19161ab-082d-4489-93df-8008cdef83ce from the persistent domain config.
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.794 239549 DEBUG nova.virt.libvirt.driver [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a19161ab-082d-4489-93df-8008cdef83ce from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.794 239549 DEBUG nova.virt.libvirt.guest [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9d8b0104-e8e0-41d9-8b53-4c657499398c">
Feb 02 15:37:02 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   </source>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <serial>9d8b0104-e8e0-41d9-8b53-4c657499398c</serial>
Feb 02 15:37:02 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:37:02 compute-0 nova_compute[239545]: </disk>
Feb 02 15:37:02 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.893 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046622.89344, a19161ab-082d-4489-93df-8008cdef83ce => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.896 239549 DEBUG nova.virt.libvirt.driver [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a19161ab-082d-4489-93df-8008cdef83ce _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:37:02 compute-0 nova_compute[239545]: 2026-02-02 15:37:02.898 239549 INFO nova.virt.libvirt.driver [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully detached device vdb from instance a19161ab-082d-4489-93df-8008cdef83ce from the live domain config.
Feb 02 15:37:03 compute-0 lvm[254570]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:37:03 compute-0 lvm[254569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:37:03 compute-0 lvm[254570]: VG ceph_vg1 finished
Feb 02 15:37:03 compute-0 lvm[254569]: VG ceph_vg0 finished
Feb 02 15:37:03 compute-0 lvm[254572]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:37:03 compute-0 lvm[254572]: VG ceph_vg2 finished
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.091 239549 DEBUG nova.objects.instance [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'flavor' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.126 239549 DEBUG oslo_concurrency.lockutils [None req-a1928639-f1d7-4bdb-92e3-7a28bef72831 b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2649967924' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2649967924' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:03 compute-0 silly_visvesvaraya[254489]: {}
Feb 02 15:37:03 compute-0 systemd[1]: libpod-840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d.scope: Deactivated successfully.
Feb 02 15:37:03 compute-0 systemd[1]: libpod-840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d.scope: Consumed 1.032s CPU time.
Feb 02 15:37:03 compute-0 podman[254473]: 2026-02-02 15:37:03.161790642 +0000 UTC m=+0.892152518 container died 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-05340dcb2853e42a1776014dddaf984fd062740312e8ef15cef3edc623f4f91f-merged.mount: Deactivated successfully.
Feb 02 15:37:03 compute-0 podman[254473]: 2026-02-02 15:37:03.202098003 +0000 UTC m=+0.932459869 container remove 840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:37:03 compute-0 systemd[1]: libpod-conmon-840c5e3653bb0557503aa18bbf49393f5c4e9578df417599731c0c07af22263d.scope: Deactivated successfully.
Feb 02 15:37:03 compute-0 sudo[254397]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:37:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:37:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:37:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:37:03 compute-0 sudo[254585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:37:03 compute-0 sudo[254585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:37:03 compute-0 sudo[254585]: pam_unix(sudo:session): session closed for user root
Feb 02 15:37:03 compute-0 ceph-mon[75334]: osdmap e269: 3 total, 3 up, 3 in
Feb 02 15:37:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2649967924' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2649967924' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:37:03 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.522305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623522336, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2451, "num_deletes": 261, "total_data_size": 3484324, "memory_usage": 3552160, "flush_reason": "Manual Compaction"}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 02 15:37:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 158 KiB/s rd, 12 KiB/s wr, 212 op/s
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.536 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.539 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623549551, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3404516, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21518, "largest_seqno": 23968, "table_properties": {"data_size": 3393192, "index_size": 7370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24561, "raw_average_key_size": 21, "raw_value_size": 3370084, "raw_average_value_size": 2927, "num_data_blocks": 320, "num_entries": 1151, "num_filter_entries": 1151, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046470, "oldest_key_time": 1770046470, "file_creation_time": 1770046623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 27306 microseconds, and 4663 cpu microseconds.
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.549606) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3404516 bytes OK
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.549626) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.551555) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.551570) EVENT_LOG_v1 {"time_micros": 1770046623551565, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.551588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3473674, prev total WAL file size 3473674, number of live WAL files 2.
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.552232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3324KB)], [50(7483KB)]
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623552289, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11067374, "oldest_snapshot_seqno": -1}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5260 keys, 9305120 bytes, temperature: kUnknown
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623614629, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9305120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9265882, "index_size": 24984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 129679, "raw_average_key_size": 24, "raw_value_size": 9167042, "raw_average_value_size": 1742, "num_data_blocks": 1029, "num_entries": 5260, "num_filter_entries": 5260, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.614885) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9305120 bytes
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.615868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.2 rd, 149.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5790, records dropped: 530 output_compression: NoCompression
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.615883) EVENT_LOG_v1 {"time_micros": 1770046623615875, "job": 26, "event": "compaction_finished", "compaction_time_micros": 62442, "compaction_time_cpu_micros": 14793, "output_level": 6, "num_output_files": 1, "total_output_size": 9305120, "num_input_records": 5790, "num_output_records": 5260, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623616179, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046623616787, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.552138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.616829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.616834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.616836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.616837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:37:03.616838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.914 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.914 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.914 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.915 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.915 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.916 239549 INFO nova.compute.manager [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Terminating instance
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.917 239549 DEBUG nova.compute.manager [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:37:03 compute-0 kernel: tap8489a727-80 (unregistering): left promiscuous mode
Feb 02 15:37:03 compute-0 NetworkManager[49171]: <info>  [1770046623.9734] device (tap8489a727-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:37:03 compute-0 ovn_controller[144995]: 2026-02-02T15:37:03Z|00100|binding|INFO|Releasing lport 8489a727-801c-4762-8094-7fe19ffe6dc8 from this chassis (sb_readonly=0)
Feb 02 15:37:03 compute-0 ovn_controller[144995]: 2026-02-02T15:37:03Z|00101|binding|INFO|Setting lport 8489a727-801c-4762-8094-7fe19ffe6dc8 down in Southbound
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.981 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:03 compute-0 ovn_controller[144995]: 2026-02-02T15:37:03Z|00102|binding|INFO|Removing iface tap8489a727-80 ovn-installed in OVS
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.983 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:03 compute-0 nova_compute[239545]: 2026-02-02 15:37:03.988 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:03.988 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:29:66 10.100.0.10'], port_security=['fa:16:3e:a8:29:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a19161ab-082d-4489-93df-8008cdef83ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd39cd97fc8041569e2a21b01b4ed0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '671c18e5-7ce5-4db4-9b07-3da2aec604fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=387ba1e2-c4db-437f-a706-eb9807770b03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=8489a727-801c-4762-8094-7fe19ffe6dc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:37:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:03.990 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 8489a727-801c-4762-8094-7fe19ffe6dc8 in datapath 8a81d067-8083-4de2-8ac6-1682b4d8e6bb unbound from our chassis
Feb 02 15:37:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:03.991 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a81d067-8083-4de2-8ac6-1682b4d8e6bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:37:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:03.992 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[efe695e5-14ed-4f12-82d4-6bed30b32f93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:03 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:03.992 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb namespace which is not needed anymore
Feb 02 15:37:04 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 02 15:37:04 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.576s CPU time.
Feb 02 15:37:04 compute-0 systemd-machined[207609]: Machine qemu-8-instance-00000008 terminated.
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [NOTICE]   (253131) : haproxy version is 2.8.14-c23fe91
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [NOTICE]   (253131) : path to executable is /usr/sbin/haproxy
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [WARNING]  (253131) : Exiting Master process...
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [WARNING]  (253131) : Exiting Master process...
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [ALERT]    (253131) : Current worker (253133) exited with code 143 (Terminated)
Feb 02 15:37:04 compute-0 neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb[253127]: [WARNING]  (253131) : All workers exited. Exiting... (0)
Feb 02 15:37:04 compute-0 systemd[1]: libpod-82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571.scope: Deactivated successfully.
Feb 02 15:37:04 compute-0 podman[254631]: 2026-02-02 15:37:04.103132524 +0000 UTC m=+0.043297045 container died 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571-userdata-shm.mount: Deactivated successfully.
Feb 02 15:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2023a184731e3b0868b2fc3dd306d36aeeee21e48e37d6020650edd5b834538c-merged.mount: Deactivated successfully.
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.149 239549 INFO nova.virt.libvirt.driver [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Instance destroyed successfully.
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.150 239549 DEBUG nova.objects.instance [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lazy-loading 'resources' on Instance uuid a19161ab-082d-4489-93df-8008cdef83ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:04 compute-0 podman[254631]: 2026-02-02 15:37:04.151582632 +0000 UTC m=+0.091747143 container cleanup 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:37:04 compute-0 systemd[1]: libpod-conmon-82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571.scope: Deactivated successfully.
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.168 239549 DEBUG nova.virt.libvirt.vif [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-851722124',display_name='tempest-VolumesBackupsTest-instance-851722124',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-851722124',id=8,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxGZ42nZmy9eFXLNdGTZZ19oqS2EDCx8WV4lvxPpX26iHVNIzHKrUETtaVbtlSEIzrxlFV11P13FOzbbdPfC/FpLJMgr90TaCBLcQZVsQySCSrgZkjhs8C7ilx+k8W4PA==',key_name='tempest-keypair-1227534713',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:36:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd39cd97fc8041569e2a21b01b4ed0db',ramdisk_id='',reservation_id='r-owyyzt3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1207235356',owner_user_name='tempest-VolumesBackupsTest-1207235356-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:36:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b10e73971e784c20a0843cf9caf5cbbe',uuid=a19161ab-082d-4489-93df-8008cdef83ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.168 239549 DEBUG nova.network.os_vif_util [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converting VIF {"id": "8489a727-801c-4762-8094-7fe19ffe6dc8", "address": "fa:16:3e:a8:29:66", "network": {"id": "8a81d067-8083-4de2-8ac6-1682b4d8e6bb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-410529581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd39cd97fc8041569e2a21b01b4ed0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8489a727-80", "ovs_interfaceid": "8489a727-801c-4762-8094-7fe19ffe6dc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.169 239549 DEBUG nova.network.os_vif_util [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.169 239549 DEBUG os_vif [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.170 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.170 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8489a727-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.172 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.174 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.176 239549 INFO os_vif [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:29:66,bridge_name='br-int',has_traffic_filtering=True,id=8489a727-801c-4762-8094-7fe19ffe6dc8,network=Network(8a81d067-8083-4de2-8ac6-1682b4d8e6bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8489a727-80')
Feb 02 15:37:04 compute-0 podman[254670]: 2026-02-02 15:37:04.214269061 +0000 UTC m=+0.043541550 container remove 82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.217 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bbfe79-bcd3-4f74-a7c7-51b5a6a6577e]: (4, ('Mon Feb  2 03:37:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb (82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571)\n82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571\nMon Feb  2 03:37:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb (82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571)\n82ee13dd50eba25698bb222b151c0721639a44de0fb045176c2158662aac6571\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.219 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[66c1a676-ae01-46c8-b1cc-8e7f78532d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.219 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a81d067-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.221 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:04 compute-0 kernel: tap8a81d067-80: left promiscuous mode
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.227 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.229 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9728ba-ab55-42d5-9b5f-2fe42eb7c090]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.247 239549 DEBUG nova.compute.manager [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-unplugged-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.248 239549 DEBUG oslo_concurrency.lockutils [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.248 239549 DEBUG oslo_concurrency.lockutils [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.248 239549 DEBUG oslo_concurrency.lockutils [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.249 239549 DEBUG nova.compute.manager [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] No waiting events found dispatching network-vif-unplugged-8489a727-801c-4762-8094-7fe19ffe6dc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.249 239549 DEBUG nova.compute.manager [req-53bffa6f-9042-4313-a7ff-0a8bbd8f2f2c req-58ab0038-e6d7-4e3e-ba2a-41aa9121e715 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-unplugged-8489a727-801c-4762-8094-7fe19ffe6dc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.249 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1161f113-bcf4-471c-897e-b095fd3d9f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.250 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a1400278-7149-4aa0-844e-7a94ae1636fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.262 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cf824b-3cde-4e5a-8045-fd7baea52964]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406940, 'reachable_time': 39767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254701, 'error': None, 'target': 'ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d8a81d067\x2d8083\x2d4de2\x2d8ac6\x2d1682b4d8e6bb.mount: Deactivated successfully.
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.265 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a81d067-8083-4de2-8ac6-1682b4d8e6bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:37:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:04.265 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7813bb-5c61-4a40-8d3d-4daac1cdd96e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.432 239549 INFO nova.virt.libvirt.driver [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Deleting instance files /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce_del
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.433 239549 INFO nova.virt.libvirt.driver [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Deletion of /var/lib/nova/instances/a19161ab-082d-4489-93df-8008cdef83ce_del complete
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.483 239549 INFO nova.compute.manager [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.483 239549 DEBUG oslo.service.loopingcall [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.484 239549 DEBUG nova.compute.manager [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.484 239549 DEBUG nova.network.neutron [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:37:04 compute-0 ceph-mon[75334]: pgmap v1170: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 158 KiB/s rd, 12 KiB/s wr, 212 op/s
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.565 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.565 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.565 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.565 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:37:04 compute-0 nova_compute[239545]: 2026-02-02 15:37:04.566 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:37:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596095841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.124 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.216 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.216 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.216 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.364 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.365 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4384MB free_disk=59.92172449082136GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.365 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.365 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.444 239549 DEBUG nova.network.neutron [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.453 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance a19161ab-082d-4489-93df-8008cdef83ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.453 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 304cd645-9c75-48a4-bef2-e52534374d5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.453 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.453 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.465 239549 INFO nova.compute.manager [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Took 0.98 seconds to deallocate network for instance.
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.471 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.508 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.509 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.523 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.527 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 174 KiB/s rd, 10 KiB/s wr, 231 op/s
Feb 02 15:37:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2596095841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.545 239549 DEBUG nova.compute.manager [req-f6184b94-0732-45c6-a88b-5a9a63ae75d2 req-ec4cbc1f-68bd-4c83-bf6f-f6f59ca583a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-deleted-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.553 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:37:05 compute-0 nova_compute[239545]: 2026-02-02 15:37:05.607 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:37:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124370871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.134 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.138 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.155 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.177 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.177 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.177 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.228 239549 DEBUG oslo_concurrency.processutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.322 239549 DEBUG nova.compute.manager [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.323 239549 DEBUG oslo_concurrency.lockutils [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "a19161ab-082d-4489-93df-8008cdef83ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.323 239549 DEBUG oslo_concurrency.lockutils [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.323 239549 DEBUG oslo_concurrency.lockutils [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.323 239549 DEBUG nova.compute.manager [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] No waiting events found dispatching network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.323 239549 WARNING nova.compute.manager [req-8939f413-8b2e-4fe8-9264-ea403aa1b38c req-adbaf257-9227-4d49-8252-96413efa96f1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Received unexpected event network-vif-plugged-8489a727-801c-4762-8094-7fe19ffe6dc8 for instance with vm_state deleted and task_state None.
Feb 02 15:37:06 compute-0 ceph-mon[75334]: pgmap v1171: 305 pgs: 2 active+clean+snaptrim, 17 active+clean+snaptrim_wait, 286 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 174 KiB/s rd, 10 KiB/s wr, 231 op/s
Feb 02 15:37:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3124370871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:06 compute-0 ovn_controller[144995]: 2026-02-02T15:37:06Z|00103|binding|INFO|Releasing lport 3156bb6d-ffcf-4cf9-b8f0-2e49b08f8b4d from this chassis (sb_readonly=0)
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.701 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:37:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314345975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.763 239549 DEBUG oslo_concurrency.processutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.768 239549 DEBUG nova.compute.provider_tree [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.783 239549 DEBUG nova.scheduler.client.report [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.817 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.844 239549 INFO nova.scheduler.client.report [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Deleted allocations for instance a19161ab-082d-4489-93df-8008cdef83ce
Feb 02 15:37:06 compute-0 nova_compute[239545]: 2026-02-02 15:37:06.918 239549 DEBUG oslo_concurrency.lockutils [None req-988fbdd5-a423-4ecb-9c73-60376a3c9c7d b10e73971e784c20a0843cf9caf5cbbe cd39cd97fc8041569e2a21b01b4ed0db - - default default] Lock "a19161ab-082d-4489-93df-8008cdef83ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:07 compute-0 nova_compute[239545]: 2026-02-02 15:37:07.178 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:07 compute-0 nova_compute[239545]: 2026-02-02 15:37:07.179 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 158 KiB/s rd, 7.9 KiB/s wr, 210 op/s
Feb 02 15:37:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1314345975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.538 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.573 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.574 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.574 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.574 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.575 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.576 239549 INFO nova.compute.manager [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Terminating instance
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.577 239549 DEBUG nova.compute.manager [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:37:08 compute-0 ceph-mon[75334]: pgmap v1172: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 158 KiB/s rd, 7.9 KiB/s wr, 210 op/s
Feb 02 15:37:08 compute-0 kernel: tapae24d426-50 (unregistering): left promiscuous mode
Feb 02 15:37:08 compute-0 NetworkManager[49171]: <info>  [1770046628.6278] device (tapae24d426-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:37:08 compute-0 ovn_controller[144995]: 2026-02-02T15:37:08Z|00104|binding|INFO|Releasing lport ae24d426-5095-4b1a-9447-99d1205851d0 from this chassis (sb_readonly=0)
Feb 02 15:37:08 compute-0 ovn_controller[144995]: 2026-02-02T15:37:08Z|00105|binding|INFO|Setting lport ae24d426-5095-4b1a-9447-99d1205851d0 down in Southbound
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.663 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 ovn_controller[144995]: 2026-02-02T15:37:08Z|00106|binding|INFO|Removing iface tapae24d426-50 ovn-installed in OVS
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.665 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.670 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.673 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:6c:e3 10.100.0.6'], port_security=['fa:16:3e:5b:6c:e3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '304cd645-9c75-48a4-bef2-e52534374d5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38955a398ac84e6292ec72dd46d5a973', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa658106-2847-4d06-87ee-90b34f78ae7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bddabc3e-3d66-4dfa-bd39-6fb99a743486, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ae24d426-5095-4b1a-9447-99d1205851d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.674 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ae24d426-5095-4b1a-9447-99d1205851d0 in datapath c3ceba88-6072-4e8b-849a-7f0feefeaf73 unbound from our chassis
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.675 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3ceba88-6072-4e8b-849a-7f0feefeaf73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.675 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[578e1390-8e22-4efd-871b-0e96b39f9f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.676 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73 namespace which is not needed anymore
Feb 02 15:37:08 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 02 15:37:08 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 14.258s CPU time.
Feb 02 15:37:08 compute-0 systemd-machined[207609]: Machine qemu-9-instance-00000009 terminated.
Feb 02 15:37:08 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [NOTICE]   (253732) : haproxy version is 2.8.14-c23fe91
Feb 02 15:37:08 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [NOTICE]   (253732) : path to executable is /usr/sbin/haproxy
Feb 02 15:37:08 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [WARNING]  (253732) : Exiting Master process...
Feb 02 15:37:08 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [ALERT]    (253732) : Current worker (253734) exited with code 143 (Terminated)
Feb 02 15:37:08 compute-0 neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73[253728]: [WARNING]  (253732) : All workers exited. Exiting... (0)
Feb 02 15:37:08 compute-0 systemd[1]: libpod-0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301.scope: Deactivated successfully.
Feb 02 15:37:08 compute-0 podman[254794]: 2026-02-02 15:37:08.777815437 +0000 UTC m=+0.040064400 container died 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.793 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.807 239549 INFO nova.virt.libvirt.driver [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Instance destroyed successfully.
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.807 239549 DEBUG nova.objects.instance [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lazy-loading 'resources' on Instance uuid 304cd645-9c75-48a4-bef2-e52534374d5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301-userdata-shm.mount: Deactivated successfully.
Feb 02 15:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4e08a0ca02de1c98d299614360898cd0e0d0fc079634ea794131d9edf5364d2-merged.mount: Deactivated successfully.
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.822 239549 DEBUG nova.virt.libvirt.vif [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1212069114',display_name='tempest-instance-1212069114',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1212069114',id=9,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMqG+1E3NIr10c8CJSJ+GP1kqg+GuUbWR7tkG9T6caPQyltbKlM5hixdyE6JKDdeZ9QJ3HyYVSNI6wBjrKCMNKYeUVJdASpMrALkEdfg0h3qhbDwSVGfPCNcdhpEwtygSw==',key_name='tempest-keypair-450571809',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:36:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38955a398ac84e6292ec72dd46d5a973',ramdisk_id='',reservation_id='r-06c5ukij',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-622371638',owner_user_name='tempest-VolumesBackupsTest-622371638-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:36:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1869bacd75349e1b296189b33fb5426',uuid=304cd645-9c75-48a4-bef2-e52534374d5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.822 239549 DEBUG nova.network.os_vif_util [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converting VIF {"id": "ae24d426-5095-4b1a-9447-99d1205851d0", "address": "fa:16:3e:5b:6c:e3", "network": {"id": "c3ceba88-6072-4e8b-849a-7f0feefeaf73", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-457450034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38955a398ac84e6292ec72dd46d5a973", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae24d426-50", "ovs_interfaceid": "ae24d426-5095-4b1a-9447-99d1205851d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.823 239549 DEBUG nova.network.os_vif_util [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.823 239549 DEBUG os_vif [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.825 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.825 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae24d426-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:08 compute-0 podman[254794]: 2026-02-02 15:37:08.825639652 +0000 UTC m=+0.087888595 container cleanup 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.826 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.828 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.830 239549 INFO os_vif [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:6c:e3,bridge_name='br-int',has_traffic_filtering=True,id=ae24d426-5095-4b1a-9447-99d1205851d0,network=Network(c3ceba88-6072-4e8b-849a-7f0feefeaf73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae24d426-50')
Feb 02 15:37:08 compute-0 systemd[1]: libpod-conmon-0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301.scope: Deactivated successfully.
Feb 02 15:37:08 compute-0 podman[254833]: 2026-02-02 15:37:08.884021433 +0000 UTC m=+0.042695393 container remove 0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.889 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[dac1741c-3b36-496c-b750-c5a6b952316e]: (4, ('Mon Feb  2 03:37:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73 (0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301)\n0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301\nMon Feb  2 03:37:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73 (0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301)\n0747ad17c421c0fb1b201cb11ed32273cb7d1a569b11d6e07a387d2f849fe301\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.891 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a63b00c3-2633-4d3e-adc3-26e9359f4a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.892 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3ceba88-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.894 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 kernel: tapc3ceba88-60: left promiscuous mode
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.896 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.898 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f47709cc-cfba-4421-b7d7-e7d31beb56a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 nova_compute[239545]: 2026-02-02 15:37:08.902 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.913 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c48afb61-2252-4765-9724-3cd8357a1bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.915 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf77ce90-8467-481a-86d9-8aa6c8da6b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.927 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6571fe21-20e5-4a73-b143-055fc9c58dca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407946, 'reachable_time': 36142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254866, 'error': None, 'target': 'ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dc3ceba88\x2d6072\x2d4e8b\x2d849a\x2d7f0feefeaf73.mount: Deactivated successfully.
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.930 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c3ceba88-6072-4e8b-849a-7f0feefeaf73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:37:08 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:08.930 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[6c078979-65de-4217-bc0a-a459b32618ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.046 239549 DEBUG nova.compute.manager [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-unplugged-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.047 239549 DEBUG oslo_concurrency.lockutils [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.047 239549 DEBUG oslo_concurrency.lockutils [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.047 239549 DEBUG oslo_concurrency.lockutils [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.047 239549 DEBUG nova.compute.manager [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] No waiting events found dispatching network-vif-unplugged-ae24d426-5095-4b1a-9447-99d1205851d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.048 239549 DEBUG nova.compute.manager [req-15dc8e83-9e44-4198-9ca7-de4ba520e2fb req-03cff74e-ff27-4534-a49d-90c1401a9fd7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-unplugged-ae24d426-5095-4b1a-9447-99d1205851d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.082 239549 INFO nova.virt.libvirt.driver [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Deleting instance files /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e_del
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.082 239549 INFO nova.virt.libvirt.driver [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Deletion of /var/lib/nova/instances/304cd645-9c75-48a4-bef2-e52534374d5e_del complete
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.134 239549 INFO nova.compute.manager [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Took 0.56 seconds to destroy the instance on the hypervisor.
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.134 239549 DEBUG oslo.service.loopingcall [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.135 239549 DEBUG nova.compute.manager [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:37:09 compute-0 nova_compute[239545]: 2026-02-02 15:37:09.135 239549 DEBUG nova.network.neutron [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:37:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 83 KiB/s rd, 3.4 KiB/s wr, 114 op/s
Feb 02 15:37:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb 02 15:37:10 compute-0 ceph-mon[75334]: pgmap v1173: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 83 KiB/s rd, 3.4 KiB/s wr, 114 op/s
Feb 02 15:37:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb 02 15:37:10 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb 02 15:37:10 compute-0 nova_compute[239545]: 2026-02-02 15:37:10.967 239549 DEBUG nova.network.neutron [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:10 compute-0 nova_compute[239545]: 2026-02-02 15:37:10.986 239549 INFO nova.compute.manager [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Took 1.85 seconds to deallocate network for instance.
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.143 239549 INFO nova.compute.manager [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Took 0.16 seconds to detach 1 volumes for instance.
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.161 239549 DEBUG nova.compute.manager [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.161 239549 DEBUG oslo_concurrency.lockutils [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.162 239549 DEBUG oslo_concurrency.lockutils [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.162 239549 DEBUG oslo_concurrency.lockutils [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.162 239549 DEBUG nova.compute.manager [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] No waiting events found dispatching network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.162 239549 WARNING nova.compute.manager [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received unexpected event network-vif-plugged-ae24d426-5095-4b1a-9447-99d1205851d0 for instance with vm_state active and task_state deleting.
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.163 239549 DEBUG nova.compute.manager [req-70e4d859-d9f6-49c3-bbc1-b034a8a39bac req-419d583f-12d6-49d0-a75b-89de5905d159 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Received event network-vif-deleted-ae24d426-5095-4b1a-9447-99d1205851d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.188 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.188 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.223 239549 DEBUG oslo_concurrency.processutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 96 KiB/s rd, 6.4 KiB/s wr, 133 op/s
Feb 02 15:37:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb 02 15:37:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb 02 15:37:11 compute-0 ceph-mon[75334]: osdmap e270: 3 total, 3 up, 3 in
Feb 02 15:37:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb 02 15:37:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:37:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67986243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.818 239549 DEBUG oslo_concurrency.processutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.823 239549 DEBUG nova.compute.provider_tree [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.838 239549 DEBUG nova.scheduler.client.report [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.866 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.893 239549 INFO nova.scheduler.client.report [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Deleted allocations for instance 304cd645-9c75-48a4-bef2-e52534374d5e
Feb 02 15:37:11 compute-0 nova_compute[239545]: 2026-02-02 15:37:11.951 239549 DEBUG oslo_concurrency.lockutils [None req-07528807-4720-415b-a0d2-2604176d2d70 f1869bacd75349e1b296189b33fb5426 38955a398ac84e6292ec72dd46d5a973 - - default default] Lock "304cd645-9c75-48a4-bef2-e52534374d5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb 02 15:37:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb 02 15:37:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb 02 15:37:12 compute-0 ceph-mon[75334]: pgmap v1175: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 96 KiB/s rd, 6.4 KiB/s wr, 133 op/s
Feb 02 15:37:12 compute-0 ceph-mon[75334]: osdmap e271: 3 total, 3 up, 3 in
Feb 02 15:37:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/67986243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:12 compute-0 ceph-mon[75334]: osdmap e272: 3 total, 3 up, 3 in
Feb 02 15:37:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105315236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105315236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:37:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/480187474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:13 compute-0 nova_compute[239545]: 2026-02-02 15:37:13.407 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 47 KiB/s rd, 5.0 KiB/s wr, 67 op/s
Feb 02 15:37:13 compute-0 nova_compute[239545]: 2026-02-02 15:37:13.541 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb 02 15:37:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2105315236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2105315236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/480187474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb 02 15:37:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb 02 15:37:13 compute-0 nova_compute[239545]: 2026-02-02 15:37:13.827 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:14.082 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:37:14 compute-0 nova_compute[239545]: 2026-02-02 15:37:14.082 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:14.084 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:37:14 compute-0 ceph-mon[75334]: pgmap v1178: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 47 KiB/s rd, 5.0 KiB/s wr, 67 op/s
Feb 02 15:37:14 compute-0 ceph-mon[75334]: osdmap e273: 3 total, 3 up, 3 in
Feb 02 15:37:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb 02 15:37:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb 02 15:37:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:15 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:15.087 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 106 op/s
Feb 02 15:37:15 compute-0 ceph-mon[75334]: osdmap e274: 3 total, 3 up, 3 in
Feb 02 15:37:15 compute-0 ceph-mon[75334]: pgmap v1181: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 106 op/s
Feb 02 15:37:16 compute-0 nova_compute[239545]: 2026-02-02 15:37:16.305 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb 02 15:37:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb 02 15:37:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb 02 15:37:17 compute-0 nova_compute[239545]: 2026-02-02 15:37:17.026 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:17 compute-0 nova_compute[239545]: 2026-02-02 15:37:17.102 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 169 op/s
Feb 02 15:37:17 compute-0 ceph-mon[75334]: osdmap e275: 3 total, 3 up, 3 in
Feb 02 15:37:17 compute-0 ceph-mon[75334]: pgmap v1183: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 169 op/s
Feb 02 15:37:18 compute-0 nova_compute[239545]: 2026-02-02 15:37:18.542 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:18 compute-0 nova_compute[239545]: 2026-02-02 15:37:18.832 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb 02 15:37:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb 02 15:37:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb 02 15:37:19 compute-0 nova_compute[239545]: 2026-02-02 15:37:19.147 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046624.1468782, a19161ab-082d-4489-93df-8008cdef83ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:37:19 compute-0 nova_compute[239545]: 2026-02-02 15:37:19.148 239549 INFO nova.compute.manager [-] [instance: a19161ab-082d-4489-93df-8008cdef83ce] VM Stopped (Lifecycle Event)
Feb 02 15:37:19 compute-0 nova_compute[239545]: 2026-02-02 15:37:19.178 239549 DEBUG nova.compute.manager [None req-99d36ce9-9a52-4982-8212-90e0d694c992 - - - - - -] [instance: a19161ab-082d-4489-93df-8008cdef83ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 173 op/s
Feb 02 15:37:19 compute-0 ceph-mon[75334]: osdmap e276: 3 total, 3 up, 3 in
Feb 02 15:37:19 compute-0 ceph-mon[75334]: pgmap v1185: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 173 op/s
Feb 02 15:37:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 161 op/s
Feb 02 15:37:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb 02 15:37:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb 02 15:37:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb 02 15:37:22 compute-0 ceph-mon[75334]: pgmap v1186: 305 pgs: 305 active+clean; 2.2 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 161 op/s
Feb 02 15:37:22 compute-0 ceph-mon[75334]: osdmap e277: 3 total, 3 up, 3 in
Feb 02 15:37:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 94 op/s
Feb 02 15:37:23 compute-0 nova_compute[239545]: 2026-02-02 15:37:23.543 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:23 compute-0 nova_compute[239545]: 2026-02-02 15:37:23.805 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046628.8044753, 304cd645-9c75-48a4-bef2-e52534374d5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:37:23 compute-0 nova_compute[239545]: 2026-02-02 15:37:23.806 239549 INFO nova.compute.manager [-] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] VM Stopped (Lifecycle Event)
Feb 02 15:37:23 compute-0 nova_compute[239545]: 2026-02-02 15:37:23.830 239549 DEBUG nova.compute.manager [None req-6c209595-8e14-4dc5-b6b3-787c0976ca9f - - - - - -] [instance: 304cd645-9c75-48a4-bef2-e52534374d5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:23 compute-0 nova_compute[239545]: 2026-02-02 15:37:23.834 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:37:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723223947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:24 compute-0 ceph-mon[75334]: pgmap v1188: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 94 op/s
Feb 02 15:37:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1723223947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 88 op/s
Feb 02 15:37:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:37:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 5382 syncs, 3.03 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 23.73 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4383 syncs, 2.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:37:25 compute-0 ceph-mon[75334]: pgmap v1189: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 88 op/s
Feb 02 15:37:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb 02 15:37:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb 02 15:37:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 18 MiB/s wr, 135 op/s
Feb 02 15:37:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb 02 15:37:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242243806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242243806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:28 compute-0 nova_compute[239545]: 2026-02-02 15:37:28.546 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:28 compute-0 ceph-mon[75334]: pgmap v1190: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 18 MiB/s wr, 135 op/s
Feb 02 15:37:28 compute-0 ceph-mon[75334]: osdmap e278: 3 total, 3 up, 3 in
Feb 02 15:37:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4242243806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4242243806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:28 compute-0 nova_compute[239545]: 2026-02-02 15:37:28.835 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:37:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 75K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 6125 syncs, 3.03 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 30.76 MB, 0.05 MB/s
                                           Interval WAL: 11K writes, 4716 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:37:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 43 MiB/s wr, 164 op/s
Feb 02 15:37:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb 02 15:37:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb 02 15:37:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb 02 15:37:30 compute-0 ceph-mon[75334]: pgmap v1192: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 43 MiB/s wr, 164 op/s
Feb 02 15:37:30 compute-0 ceph-mon[75334]: osdmap e279: 3 total, 3 up, 3 in
Feb 02 15:37:31 compute-0 podman[254894]: 2026-02-02 15:37:31.30889211 +0000 UTC m=+0.041097935 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 15:37:31 compute-0 podman[254893]: 2026-02-02 15:37:31.333529315 +0000 UTC m=+0.065934524 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 02 15:37:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 3.1 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 242 KiB/s rd, 120 MiB/s wr, 404 op/s
Feb 02 15:37:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153431961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153431961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2153431961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2153431961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.037 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.038 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.059 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.155 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.155 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.162 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.163 239549 INFO nova.compute.claims [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.272 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb 02 15:37:32 compute-0 ceph-mon[75334]: pgmap v1194: 305 pgs: 305 active+clean; 3.1 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 242 KiB/s rd, 120 MiB/s wr, 404 op/s
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb 02 15:37:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:37:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3233292099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034190513' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034190513' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.810 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.816 239549 DEBUG nova.compute.provider_tree [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.831 239549 DEBUG nova.scheduler.client.report [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.861 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.862 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.912 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.913 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.937 239549 INFO nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:37:32 compute-0 nova_compute[239545]: 2026-02-02 15:37:32.960 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.071 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.072 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.072 239549 INFO nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Creating image(s)
Feb 02 15:37:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:37:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 57K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3863 syncs, 3.33 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7133 writes, 33K keys, 7133 commit groups, 1.0 writes per commit group, ingest: 16.93 MB, 0.03 MB/s
                                           Interval WAL: 7133 writes, 2965 syncs, 2.41 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.091 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.111 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.129 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.132 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.146 239549 DEBUG nova.policy [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b51e52171e514748b1584f228f0231ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '115560eaceb947abbaeaf329e9ab5683', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.188 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.189 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.190 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.190 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.216 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.220 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.464 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.521 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] resizing rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:37:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.9 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 260 KiB/s rd, 149 MiB/s wr, 451 op/s
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.571 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.662 239549 DEBUG nova.objects.instance [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'migration_context' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:33 compute-0 ceph-mon[75334]: osdmap e280: 3 total, 3 up, 3 in
Feb 02 15:37:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3233292099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:37:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3034190513' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3034190513' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:33 compute-0 ceph-mon[75334]: pgmap v1196: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.9 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 260 KiB/s rd, 149 MiB/s wr, 451 op/s
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.681 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.682 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Ensure instance console log exists: /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.682 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.683 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.683 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:33 compute-0 nova_compute[239545]: 2026-02-02 15:37:33.878 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/623075893' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/623075893' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:34 compute-0 nova_compute[239545]: 2026-02-02 15:37:34.281 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Successfully created port: f11c6544-0831-4f4d-9959-e8a813d59f02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:37:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/623075893' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/623075893' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.322 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Successfully updated port: f11c6544-0831-4f4d-9959-e8a813d59f02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.349 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.349 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquired lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.349 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:37:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.472 239549 DEBUG nova.compute.manager [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-changed-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.472 239549 DEBUG nova.compute.manager [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Refreshing instance network info cache due to event network-changed-f11c6544-0831-4f4d-9959-e8a813d59f02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.472 239549 DEBUG oslo_concurrency.lockutils [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:37:35 compute-0 nova_compute[239545]: 2026-02-02 15:37:35.538 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:37:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 307 KiB/s rd, 114 MiB/s wr, 517 op/s
Feb 02 15:37:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb 02 15:37:35 compute-0 ceph-mon[75334]: pgmap v1197: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 307 KiB/s rd, 114 MiB/s wr, 517 op/s
Feb 02 15:37:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb 02 15:37:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.524 239549 DEBUG nova.network.neutron [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updating instance_info_cache with network_info: [{"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.549 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Releasing lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.550 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Instance network_info: |[{"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.551 239549 DEBUG oslo_concurrency.lockutils [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.551 239549 DEBUG nova.network.neutron [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Refreshing network info cache for port f11c6544-0831-4f4d-9959-e8a813d59f02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.557 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Start _get_guest_xml network_info=[{"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.563 239549 WARNING nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.575 239549 DEBUG nova.virt.libvirt.host [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.576 239549 DEBUG nova.virt.libvirt.host [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.579 239549 DEBUG nova.virt.libvirt.host [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.580 239549 DEBUG nova.virt.libvirt.host [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.580 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.580 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.581 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.581 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.581 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.581 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.582 239549 DEBUG nova.virt.hardware [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:37:36 compute-0 nova_compute[239545]: 2026-02-02 15:37:36.585 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb 02 15:37:36 compute-0 ceph-mon[75334]: osdmap e281: 3 total, 3 up, 3 in
Feb 02 15:37:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb 02 15:37:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb 02 15:37:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:37:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573917847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.146 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.166 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.169 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1034684067' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1034684067' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 170 KiB/s rd, 18 MiB/s wr, 282 op/s
Feb 02 15:37:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:37:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763507549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.708 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.710 239549 DEBUG nova.virt.libvirt.vif [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:37:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1268133418',display_name='tempest-TestEncryptedCinderVolumes-server-1268133418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1268133418',id=10,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJLlMvuNCYtL11DjRJ1K5ygIqHM/lR7WnMbq8DojV+lv2C2/WKdvjdC2b5d3qqOO33vsgTNfmOxGVgH90dQgZIdYWO430u/oR9Jo6xHCtxYNxJboO7WvaiIF21O8RQkmw==',key_name='tempest-keypair-974496074',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='115560eaceb947abbaeaf329e9ab5683',ramdisk_id='',reservation_id='r-rkzwuwa9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1499518443',owner_user_name='tempest-TestEncryptedCinderVolumes-1499518443-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:37:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b51e52171e514748b1584f228f0231ac',uuid=e39fbf7a-5b10-4f35-b531-efb11df8a34b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.711 239549 DEBUG nova.network.os_vif_util [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converting VIF {"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.712 239549 DEBUG nova.network.os_vif_util [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.713 239549 DEBUG nova.objects.instance [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'pci_devices' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.730 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <uuid>e39fbf7a-5b10-4f35-b531-efb11df8a34b</uuid>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <name>instance-0000000a</name>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1268133418</nova:name>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:37:36</nova:creationTime>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:user uuid="b51e52171e514748b1584f228f0231ac">tempest-TestEncryptedCinderVolumes-1499518443-project-member</nova:user>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:project uuid="115560eaceb947abbaeaf329e9ab5683">tempest-TestEncryptedCinderVolumes-1499518443</nova:project>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <nova:port uuid="f11c6544-0831-4f4d-9959-e8a813d59f02">
Feb 02 15:37:37 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <system>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="serial">e39fbf7a-5b10-4f35-b531-efb11df8a34b</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="uuid">e39fbf7a-5b10-4f35-b531-efb11df8a34b</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </system>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <os>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </os>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <features>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </features>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk">
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </source>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config">
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </source>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:37:37 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:b3:87:92"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <target dev="tapf11c6544-08"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/console.log" append="off"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <video>
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </video>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:37:37 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:37:37 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:37:37 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:37:37 compute-0 nova_compute[239545]: </domain>
Feb 02 15:37:37 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.732 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Preparing to wait for external event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.732 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.732 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.733 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.734 239549 DEBUG nova.virt.libvirt.vif [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:37:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1268133418',display_name='tempest-TestEncryptedCinderVolumes-server-1268133418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1268133418',id=10,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJLlMvuNCYtL11DjRJ1K5ygIqHM/lR7WnMbq8DojV+lv2C2/WKdvjdC2b5d3qqOO33vsgTNfmOxGVgH90dQgZIdYWO430u/oR9Jo6xHCtxYNxJboO7WvaiIF21O8RQkmw==',key_name='tempest-keypair-974496074',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='115560eaceb947abbaeaf329e9ab5683',ramdisk_id='',reservation_id='r-rkzwuwa9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1499518443',owner_user_name='tempest-TestEncryptedCinderVolumes-1499518443-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:37:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b51e52171e514748b1584f228f0231ac',uuid=e39fbf7a-5b10-4f35-b531-efb11df8a34b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.734 239549 DEBUG nova.network.os_vif_util [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converting VIF {"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.735 239549 DEBUG nova.network.os_vif_util [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.735 239549 DEBUG os_vif [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.736 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.736 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.737 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.740 239549 DEBUG nova.network.neutron [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updated VIF entry in instance network info cache for port f11c6544-0831-4f4d-9959-e8a813d59f02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.741 239549 DEBUG nova.network.neutron [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updating instance_info_cache with network_info: [{"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.743 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.743 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf11c6544-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.744 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf11c6544-08, col_values=(('external_ids', {'iface-id': 'f11c6544-0831-4f4d-9959-e8a813d59f02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:87:92', 'vm-uuid': 'e39fbf7a-5b10-4f35-b531-efb11df8a34b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.745 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:37 compute-0 NetworkManager[49171]: <info>  [1770046657.7469] manager: (tapf11c6544-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.748 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.750 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.751 239549 INFO os_vif [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08')
Feb 02 15:37:37 compute-0 ceph-mon[75334]: osdmap e282: 3 total, 3 up, 3 in
Feb 02 15:37:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2573917847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1034684067' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.772 239549 DEBUG oslo_concurrency.lockutils [req-08887a50-bceb-4347-accc-95b0b90431ca req-beee186c-c39a-42e8-bd37-6db6c4915021 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:37:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1034684067' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:37 compute-0 ceph-mon[75334]: pgmap v1200: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 170 KiB/s rd, 18 MiB/s wr, 282 op/s
Feb 02 15:37:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2763507549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.800 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.801 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.801 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No VIF found with MAC fa:16:3e:b3:87:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.801 239549 INFO nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Using config drive
Feb 02 15:37:37 compute-0 nova_compute[239545]: 2026-02-02 15:37:37.819 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.262 239549 INFO nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Creating config drive at /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.266 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2brgzch5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.383 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2brgzch5" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.404 239549 DEBUG nova.storage.rbd_utils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] rbd image e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.407 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.512 239549 DEBUG oslo_concurrency.processutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config e39fbf7a-5b10-4f35-b531-efb11df8a34b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.513 239549 INFO nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Deleting local config drive /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b/disk.config because it was imported into RBD.
Feb 02 15:37:38 compute-0 kernel: tapf11c6544-08: entered promiscuous mode
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.5424] manager: (tapf11c6544-08): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Feb 02 15:37:38 compute-0 ovn_controller[144995]: 2026-02-02T15:37:38Z|00107|binding|INFO|Claiming lport f11c6544-0831-4f4d-9959-e8a813d59f02 for this chassis.
Feb 02 15:37:38 compute-0 ovn_controller[144995]: 2026-02-02T15:37:38Z|00108|binding|INFO|f11c6544-0831-4f4d-9959-e8a813d59f02: Claiming fa:16:3e:b3:87:92 10.100.0.12
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.546 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.549 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 systemd-machined[207609]: New machine qemu-10-instance-0000000a.
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.564 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:87:92 10.100.0.12'], port_security=['fa:16:3e:b3:87:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e39fbf7a-5b10-4f35-b531-efb11df8a34b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e969847-ba87-4ece-858b-96e1806f85b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '115560eaceb947abbaeaf329e9ab5683', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562adfb0-4a8a-4daf-b4fb-8eb4927d45ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5933466e-0821-4132-8ebe-9aba51456576, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f11c6544-0831-4f4d-9959-e8a813d59f02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.565 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f11c6544-0831-4f4d-9959-e8a813d59f02 in datapath 4e969847-ba87-4ece-858b-96e1806f85b1 bound to our chassis
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.566 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e969847-ba87-4ece-858b-96e1806f85b1
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.572 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef7be7c-80c6-4744-8f30-7e27345813d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.572 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e969847-b1 in ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:37:38 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 02 15:37:38 compute-0 ovn_controller[144995]: 2026-02-02T15:37:38Z|00109|binding|INFO|Setting lport f11c6544-0831-4f4d-9959-e8a813d59f02 ovn-installed in OVS
Feb 02 15:37:38 compute-0 ovn_controller[144995]: 2026-02-02T15:37:38Z|00110|binding|INFO|Setting lport f11c6544-0831-4f4d-9959-e8a813d59f02 up in Southbound
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.577 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e969847-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.577 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eb79eea9-af0e-4a7d-9ac6-362f28a4586f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.578 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f1817607-1e77-41c6-a666-bf6c353ba476]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 systemd-udevd[255262]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.585 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[456e9954-ee55-4174-a3fc-1fd1b8a7dfe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.5908] device (tapf11c6544-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.5913] device (tapf11c6544-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.595 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6aab17ec-7c3c-49b8-9777-6f48db59bc6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.612 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[994ac873-c0ec-4529-b876-0ea52b5a56ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.615 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b00468-9b1d-4dc8-be0c-9616df0809e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.6160] manager: (tap4e969847-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.636 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1a576a-5b66-4114-9d9c-2f83d0fd2c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.638 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[65316a7b-801b-478f-b241-c29dad3c7e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.6498] device (tap4e969847-b0): carrier: link connected
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.651 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[03b2c8c1-bab2-4887-b486-3583b251448a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.663 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[05db2ca9-1be4-4744-a8eb-0f3671bae7d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e969847-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7c:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414456, 'reachable_time': 40720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255294, 'error': None, 'target': 'ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.673 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e93c2c1b-cb97-4f2c-b9b8-cf1ed1e18302]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7c5d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414456, 'tstamp': 414456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255295, 'error': None, 'target': 'ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.684 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[055efcd9-0dd4-4497-a97e-878d34b44e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e969847-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7c:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414456, 'reachable_time': 40720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255296, 'error': None, 'target': 'ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.704 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad187129-2b32-4523-a472-0ee0f0cfde2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.743 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[821df9e4-5cdd-4813-a935-33618dc9b1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.744 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e969847-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.745 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.745 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e969847-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.746 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 kernel: tap4e969847-b0: entered promiscuous mode
Feb 02 15:37:38 compute-0 NetworkManager[49171]: <info>  [1770046658.7476] manager: (tap4e969847-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.748 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.753 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e969847-b0, col_values=(('external_ids', {'iface-id': '3f6bfea0-19c6-4d81-a791-fdf9e4477758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.755 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 ovn_controller[144995]: 2026-02-02T15:37:38Z|00111|binding|INFO|Releasing lport 3f6bfea0-19c6-4d81-a791-fdf9e4477758 from this chassis (sb_readonly=0)
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.758 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e969847-ba87-4ece-858b-96e1806f85b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e969847-ba87-4ece-858b-96e1806f85b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.758 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[835e76a5-bdec-4750-a327-b6de7e36c1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.759 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-4e969847-ba87-4ece-858b-96e1806f85b1
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/4e969847-ba87-4ece-858b-96e1806f85b1.pid.haproxy
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 4e969847-ba87-4ece-858b-96e1806f85b1
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:37:38 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:38.760 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1', 'env', 'PROCESS_TAG=haproxy-4e969847-ba87-4ece-858b-96e1806f85b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e969847-ba87-4ece-858b-96e1806f85b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.762 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.780 239549 DEBUG nova.compute.manager [req-3f434f7f-cb20-4885-90cd-ca8974a9baef req-145b0f5b-c144-42cc-a9a8-28144fabdee9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.781 239549 DEBUG oslo_concurrency.lockutils [req-3f434f7f-cb20-4885-90cd-ca8974a9baef req-145b0f5b-c144-42cc-a9a8-28144fabdee9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.781 239549 DEBUG oslo_concurrency.lockutils [req-3f434f7f-cb20-4885-90cd-ca8974a9baef req-145b0f5b-c144-42cc-a9a8-28144fabdee9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.781 239549 DEBUG oslo_concurrency.lockutils [req-3f434f7f-cb20-4885-90cd-ca8974a9baef req-145b0f5b-c144-42cc-a9a8-28144fabdee9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.781 239549 DEBUG nova.compute.manager [req-3f434f7f-cb20-4885-90cd-ca8974a9baef req-145b0f5b-c144-42cc-a9a8-28144fabdee9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Processing event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:37:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb 02 15:37:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb 02 15:37:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb 02 15:37:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514701614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514701614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.950 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046658.949891, e39fbf7a-5b10-4f35-b531-efb11df8a34b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.951 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] VM Started (Lifecycle Event)
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.952 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.955 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.958 239549 INFO nova.virt.libvirt.driver [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Instance spawned successfully.
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.958 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.976 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.979 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.991 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.991 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.992 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.992 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.993 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:38 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.993 239549 DEBUG nova.virt.libvirt.driver [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:38.999 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.000 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046658.950093, e39fbf7a-5b10-4f35-b531-efb11df8a34b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.000 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] VM Paused (Lifecycle Event)
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.024 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.027 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046658.954372, e39fbf7a-5b10-4f35-b531-efb11df8a34b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.027 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] VM Resumed (Lifecycle Event)
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.043 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.046 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.050 239549 INFO nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Took 5.98 seconds to spawn the instance on the hypervisor.
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.050 239549 DEBUG nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.078 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.110 239549 INFO nova.compute.manager [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Took 6.98 seconds to build instance.
Feb 02 15:37:39 compute-0 nova_compute[239545]: 2026-02-02 15:37:39.126 239549 DEBUG oslo_concurrency.lockutils [None req-5d56add5-6a44-476f-a649-366beb9147dc b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:39 compute-0 podman[255370]: 2026-02-02 15:37:39.169181424 +0000 UTC m=+0.043521612 container create 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:37:39 compute-0 systemd[1]: Started libpod-conmon-745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47.scope.
Feb 02 15:37:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:37:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8481155138db3627c94078e08e7397c1b95c3139b06d0c44f00233952fe03f5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:37:39 compute-0 podman[255370]: 2026-02-02 15:37:39.146300881 +0000 UTC m=+0.020641099 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:37:39 compute-0 podman[255370]: 2026-02-02 15:37:39.246867441 +0000 UTC m=+0.121207699 container init 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:37:39 compute-0 podman[255370]: 2026-02-02 15:37:39.25097663 +0000 UTC m=+0.125316848 container start 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:37:39 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [NOTICE]   (255389) : New worker (255391) forked
Feb 02 15:37:39 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [NOTICE]   (255389) : Loading success.
Feb 02 15:37:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 186 KiB/s rd, 3.6 MiB/s wr, 289 op/s
Feb 02 15:37:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb 02 15:37:39 compute-0 ceph-mon[75334]: osdmap e283: 3 total, 3 up, 3 in
Feb 02 15:37:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1514701614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1514701614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:39 compute-0 ceph-mon[75334]: pgmap v1202: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 186 KiB/s rd, 3.6 MiB/s wr, 289 op/s
Feb 02 15:37:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb 02 15:37:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb 02 15:37:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740526772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1740526772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492754804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492754804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: osdmap e284: 3 total, 3 up, 3 in
Feb 02 15:37:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1740526772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1740526772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3492754804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3492754804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.855 239549 DEBUG nova.compute.manager [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.856 239549 DEBUG oslo_concurrency.lockutils [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.856 239549 DEBUG oslo_concurrency.lockutils [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.856 239549 DEBUG oslo_concurrency.lockutils [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.856 239549 DEBUG nova.compute.manager [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] No waiting events found dispatching network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:37:40 compute-0 nova_compute[239545]: 2026-02-02 15:37:40.856 239549 WARNING nova.compute.manager [req-4e5d0915-c4b0-472f-b7cc-a6080b6e2420 req-77ac9a10-4ace-4515-8378-150fc9deea9c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received unexpected event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 for instance with vm_state active and task_state None.
Feb 02 15:37:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 741 KiB/s wr, 287 op/s
Feb 02 15:37:41 compute-0 ceph-mon[75334]: pgmap v1204: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 741 KiB/s wr, 287 op/s
Feb 02 15:37:42 compute-0 NetworkManager[49171]: <info>  [1770046662.4929] manager: (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.493 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:42 compute-0 NetworkManager[49171]: <info>  [1770046662.4946] manager: (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Feb 02 15:37:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb 02 15:37:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb 02 15:37:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.574 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:42 compute-0 ovn_controller[144995]: 2026-02-02T15:37:42Z|00112|binding|INFO|Releasing lport 3f6bfea0-19c6-4d81-a791-fdf9e4477758 from this chassis (sb_readonly=0)
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.745 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:37:42
Feb 02 15:37:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:37:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:37:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'images', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Feb 02 15:37:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.920 239549 DEBUG nova.compute.manager [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-changed-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.920 239549 DEBUG nova.compute.manager [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Refreshing instance network info cache due to event network-changed-f11c6544-0831-4f4d-9959-e8a813d59f02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.921 239549 DEBUG oslo_concurrency.lockutils [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.921 239549 DEBUG oslo_concurrency.lockutils [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:37:42 compute-0 nova_compute[239545]: 2026-02-02 15:37:42.921 239549 DEBUG nova.network.neutron [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Refreshing network info cache for port f11c6544-0831-4f4d-9959-e8a813d59f02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:37:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:37:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655806298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:37:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655806298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:43 compute-0 ceph-mon[75334]: osdmap e285: 3 total, 3 up, 3 in
Feb 02 15:37:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3655806298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:37:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3655806298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:37:43 compute-0 nova_compute[239545]: 2026-02-02 15:37:43.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.3 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 4.0 MiB/s rd, 720 KiB/s wr, 394 op/s
Feb 02 15:37:44 compute-0 ceph-mon[75334]: pgmap v1206: 305 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 295 active+clean; 1.3 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 4.0 MiB/s rd, 720 KiB/s wr, 394 op/s
Feb 02 15:37:44 compute-0 nova_compute[239545]: 2026-02-02 15:37:44.717 239549 DEBUG nova.network.neutron [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updated VIF entry in instance network info cache for port f11c6544-0831-4f4d-9959-e8a813d59f02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:37:44 compute-0 nova_compute[239545]: 2026-02-02 15:37:44.718 239549 DEBUG nova.network.neutron [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updating instance_info_cache with network_info: [{"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:37:44 compute-0 nova_compute[239545]: 2026-02-02 15:37:44.743 239549 DEBUG oslo_concurrency.lockutils [req-3d4b91de-c2fe-439a-9b42-6b794fdd888b req-fcb94026-d580-440e-9ef3-ed92cb397121 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-e39fbf7a-5b10-4f35-b531-efb11df8a34b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:37:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:37:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 138 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 32 KiB/s wr, 364 op/s
Feb 02 15:37:45 compute-0 ovn_controller[144995]: 2026-02-02T15:37:45Z|00113|binding|INFO|Releasing lport 3f6bfea0-19c6-4d81-a791-fdf9e4477758 from this chassis (sb_readonly=0)
Feb 02 15:37:45 compute-0 nova_compute[239545]: 2026-02-02 15:37:45.762 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:46 compute-0 ceph-mon[75334]: pgmap v1207: 305 pgs: 305 active+clean; 138 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 32 KiB/s wr, 364 op/s
Feb 02 15:37:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb 02 15:37:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb 02 15:37:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb 02 15:37:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 138 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 318 op/s
Feb 02 15:37:47 compute-0 nova_compute[239545]: 2026-02-02 15:37:47.747 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:48 compute-0 nova_compute[239545]: 2026-02-02 15:37:48.429 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:48 compute-0 ceph-mon[75334]: osdmap e286: 3 total, 3 up, 3 in
Feb 02 15:37:48 compute-0 ceph-mon[75334]: pgmap v1209: 305 pgs: 305 active+clean; 138 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 318 op/s
Feb 02 15:37:48 compute-0 nova_compute[239545]: 2026-02-02 15:37:48.553 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 578 KiB/s rd, 2.2 KiB/s wr, 138 op/s
Feb 02 15:37:49 compute-0 ceph-mon[75334]: pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 578 KiB/s rd, 2.2 KiB/s wr, 138 op/s
Feb 02 15:37:50 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 02 15:37:51 compute-0 ovn_controller[144995]: 2026-02-02T15:37:51Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:87:92 10.100.0.12
Feb 02 15:37:51 compute-0 ovn_controller[144995]: 2026-02-02T15:37:51Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:87:92 10.100.0.12
Feb 02 15:37:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb 02 15:37:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb 02 15:37:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb 02 15:37:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 149 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Feb 02 15:37:51 compute-0 nova_compute[239545]: 2026-02-02 15:37:51.798 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:52 compute-0 ceph-mon[75334]: osdmap e287: 3 total, 3 up, 3 in
Feb 02 15:37:52 compute-0 ceph-mon[75334]: pgmap v1212: 305 pgs: 305 active+clean; 149 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Feb 02 15:37:52 compute-0 nova_compute[239545]: 2026-02-02 15:37:52.750 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Feb 02 15:37:53 compute-0 nova_compute[239545]: 2026-02-02 15:37:53.590 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000640491819198518 of space, bias 1.0, pg target 0.1921475457595554 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035849593770436464 of space, bias 1.0, pg target 0.10754878131130939 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1248537950266725e-06 of space, bias 1.0, pg target 0.00033745613850800176 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659778291424434 of space, bias 1.0, pg target 0.19979334874273302 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.369156377610653e-06 of space, bias 4.0, pg target 0.0016429876531327836 quantized to 16 (current 16)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:37:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:37:54 compute-0 ceph-mon[75334]: pgmap v1213: 305 pgs: 305 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Feb 02 15:37:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 445 KiB/s rd, 3.2 MiB/s wr, 115 op/s
Feb 02 15:37:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb 02 15:37:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb 02 15:37:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb 02 15:37:56 compute-0 nova_compute[239545]: 2026-02-02 15:37:56.263 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:56 compute-0 ceph-mon[75334]: pgmap v1214: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 445 KiB/s rd, 3.2 MiB/s wr, 115 op/s
Feb 02 15:37:56 compute-0 ceph-mon[75334]: osdmap e288: 3 total, 3 up, 3 in
Feb 02 15:37:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:37:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 447 KiB/s rd, 3.2 MiB/s wr, 115 op/s
Feb 02 15:37:57 compute-0 nova_compute[239545]: 2026-02-02 15:37:57.752 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:58 compute-0 nova_compute[239545]: 2026-02-02 15:37:58.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:37:58 compute-0 nova_compute[239545]: 2026-02-02 15:37:58.591 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb 02 15:37:58 compute-0 ceph-mon[75334]: pgmap v1216: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 447 KiB/s rd, 3.2 MiB/s wr, 115 op/s
Feb 02 15:37:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb 02 15:37:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb 02 15:37:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:59.250 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:59.250 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:37:59.251 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.270 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:37:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 379 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.647 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.647 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.662 239549 DEBUG nova.objects.instance [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'flavor' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:37:59 compute-0 ceph-mon[75334]: osdmap e289: 3 total, 3 up, 3 in
Feb 02 15:37:59 compute-0 ceph-mon[75334]: pgmap v1218: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 379 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.714 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.979 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.980 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:37:59 compute-0 nova_compute[239545]: 2026-02-02 15:37:59.981 239549 INFO nova.compute.manager [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Attaching volume 2318c826-8ab2-4990-9416-1613c3176940 to /dev/vdb
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.140 239549 DEBUG os_brick.utils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.141 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.150 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.151 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1de6d6-8eb3-4759-9f6b-fa5ed592dd7c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.152 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.156 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.157 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[46483404-4de5-4c60-a703-15963e85178b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.158 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.166 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.166 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[74cb442b-a7e0-4196-ae0d-4c58a1f342c1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.168 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e6a6a9-82bd-4717-9986-2c2b75093caf]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.168 239549 DEBUG oslo_concurrency.processutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.185 239549 DEBUG oslo_concurrency.processutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.187 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.188 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.188 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.188 239549 DEBUG os_brick.utils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:38:00 compute-0 nova_compute[239545]: 2026-02-02 15:38:00.189 239549 DEBUG nova.virt.block_device [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updating existing volume attachment record: b553b471-1546-4c62-b133-3b5a95bfbbdc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:38:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912220868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2912220868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.259 239549 DEBUG os_brick.encryptors [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Using volume encryption metadata '{'encryption_key_id': '264af8bd-ba59-478d-b073-6d26e51d45ff', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2318c826-8ab2-4990-9416-1613c3176940', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2318c826-8ab2-4990-9416-1613c3176940', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e39fbf7a-5b10-4f35-b531-efb11df8a34b', 'attached_at': '', 'detached_at': '', 'volume_id': '2318c826-8ab2-4990-9416-1613c3176940', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.264 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.288 239549 DEBUG barbicanclient.v1.secrets [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/264af8bd-ba59-478d-b073-6d26e51d45ff get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.289 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.314 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.315 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.337 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.338 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.362 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.363 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.384 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.384 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.405 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.406 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.435 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.435 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.458 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.459 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.482 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.482 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.502 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.502 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.530 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.531 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.551 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.552 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.567 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.577 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.577 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 1004 KiB/s wr, 159 op/s
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.597 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.598 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.627 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.628 239549 INFO barbicanclient.base [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Calculated Secrets uuid ref: secrets/264af8bd-ba59-478d-b073-6d26e51d45ff
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.648 239549 DEBUG barbicanclient.client [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.649 239549 DEBUG nova.virt.libvirt.host [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:38:01 compute-0 nova_compute[239545]:     <volume>2318c826-8ab2-4990-9416-1613c3176940</volume>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:38:01 compute-0 nova_compute[239545]: </secret>
Feb 02 15:38:01 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.657 239549 DEBUG nova.objects.instance [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'flavor' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.680 239549 DEBUG nova.virt.libvirt.driver [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Attempting to attach volume 2318c826-8ab2-4990-9416-1613c3176940 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:38:01 compute-0 nova_compute[239545]: 2026-02-02 15:38:01.682 239549 DEBUG nova.virt.libvirt.guest [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-2318c826-8ab2-4990-9416-1613c3176940">
Feb 02 15:38:01 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:38:01 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <serial>2318c826-8ab2-4990-9416-1613c3176940</serial>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:38:01 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="e39d9b4d-4006-4c9b-b9d0-0af6f94613e5"/>
Feb 02 15:38:01 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:38:01 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:01 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:38:02 compute-0 ceph-mon[75334]: pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 1004 KiB/s wr, 159 op/s
Feb 02 15:38:02 compute-0 podman[255430]: 2026-02-02 15:38:02.319532241 +0000 UTC m=+0.056976588 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:38:02 compute-0 podman[255429]: 2026-02-02 15:38:02.365796459 +0000 UTC m=+0.100096949 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 02 15:38:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:02 compute-0 nova_compute[239545]: 2026-02-02 15:38:02.754 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2114850052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2114850052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2114850052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:03 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2114850052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:03 compute-0 sudo[255473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:03 compute-0 sudo[255473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:03 compute-0 sudo[255473]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:03 compute-0 sudo[255498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:38:03 compute-0 sudo[255498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.562 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 24 KiB/s wr, 127 op/s
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.645 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:03 compute-0 sudo[255498]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:38:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:38:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:03 compute-0 sudo[255544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:03 compute-0 sudo[255544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:03 compute-0 sudo[255544]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:03 compute-0 sudo[255569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:38:03 compute-0 sudo[255569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.966 239549 DEBUG nova.virt.libvirt.driver [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.967 239549 DEBUG nova.virt.libvirt.driver [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.967 239549 DEBUG nova.virt.libvirt.driver [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:03 compute-0 nova_compute[239545]: 2026-02-02 15:38:03.968 239549 DEBUG nova.virt.libvirt.driver [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] No VIF found with MAC fa:16:3e:b3:87:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:38:04 compute-0 ceph-mon[75334]: pgmap v1220: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 24 KiB/s wr, 127 op/s
Feb 02 15:38:04 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:04 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:04 compute-0 nova_compute[239545]: 2026-02-02 15:38:04.128 239549 DEBUG oslo_concurrency.lockutils [None req-7d9628f8-9ef5-480e-aaae-12134de3b814 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:04 compute-0 sudo[255569]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:04 compute-0 sudo[255625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:04 compute-0 sudo[255625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:04 compute-0 sudo[255625]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:04 compute-0 sudo[255650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- inventory --format=json-pretty --filter-for-batch
Feb 02 15:38:04 compute-0 sudo[255650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:04 compute-0 nova_compute[239545]: 2026-02-02 15:38:04.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb 02 15:38:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb 02 15:38:04 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.818769689 +0000 UTC m=+0.040656933 container create f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:38:04 compute-0 systemd[1]: Started libpod-conmon-f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5.scope.
Feb 02 15:38:04 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.801331988 +0000 UTC m=+0.023219262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.903807114 +0000 UTC m=+0.125694458 container init f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.9106745 +0000 UTC m=+0.132561754 container start f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:38:04 compute-0 wonderful_satoshi[255704]: 167 167
Feb 02 15:38:04 compute-0 systemd[1]: libpod-f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5.scope: Deactivated successfully.
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.9148244 +0000 UTC m=+0.136711734 container attach f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.916139742 +0000 UTC m=+0.138026996 container died f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ced828abbf2238bf4def87edb4f51a6c78b873974bcf538e6a339c972b006bd-merged.mount: Deactivated successfully.
Feb 02 15:38:04 compute-0 podman[255687]: 2026-02-02 15:38:04.979483933 +0000 UTC m=+0.201371177 container remove f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:38:04 compute-0 systemd[1]: libpod-conmon-f7a16f6b3d569807b032c3fb167667ee849e17b8cefe66cb43d71bab2dedfaa5.scope: Deactivated successfully.
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.100873445 +0000 UTC m=+0.037626929 container create cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:38:05 compute-0 systemd[1]: Started libpod-conmon-cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f.scope.
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.143 239549 DEBUG oslo_concurrency.lockutils [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.145 239549 DEBUG oslo_concurrency.lockutils [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478840198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478840198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.169 239549 INFO nova.compute.manager [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Detaching volume 2318c826-8ab2-4990-9416-1613c3176940
Feb 02 15:38:05 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc092460899e68657a3eb77c56921683731603513a160f5f6208026d9f589d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc092460899e68657a3eb77c56921683731603513a160f5f6208026d9f589d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc092460899e68657a3eb77c56921683731603513a160f5f6208026d9f589d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc092460899e68657a3eb77c56921683731603513a160f5f6208026d9f589d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.082367718 +0000 UTC m=+0.019121132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.187791726 +0000 UTC m=+0.124545130 container init cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.195232805 +0000 UTC m=+0.131986180 container start cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.199401256 +0000 UTC m=+0.136154660 container attach cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.289 239549 INFO nova.virt.block_device [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Attempting to driver detach volume 2318c826-8ab2-4990-9416-1613c3176940 from mountpoint /dev/vdb
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.396 239549 DEBUG os_brick.encryptors [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Using volume encryption metadata '{'encryption_key_id': '264af8bd-ba59-478d-b073-6d26e51d45ff', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2318c826-8ab2-4990-9416-1613c3176940', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2318c826-8ab2-4990-9416-1613c3176940', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e39fbf7a-5b10-4f35-b531-efb11df8a34b', 'attached_at': '', 'detached_at': '', 'volume_id': '2318c826-8ab2-4990-9416-1613c3176940', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.403 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.403 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.411 239549 DEBUG nova.virt.libvirt.driver [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Attempting to detach device vdb from instance e39fbf7a-5b10-4f35-b531-efb11df8a34b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.412 239549 DEBUG nova.virt.libvirt.guest [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-2318c826-8ab2-4990-9416-1613c3176940">
Feb 02 15:38:05 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <serial>2318c826-8ab2-4990-9416-1613c3176940</serial>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:38:05 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="e39d9b4d-4006-4c9b-b9d0-0af6f94613e5"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:38:05 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:05 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.419 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.423 239549 INFO nova.virt.libvirt.driver [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Successfully detached device vdb from instance e39fbf7a-5b10-4f35-b531-efb11df8a34b from the persistent domain config.
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.423 239549 DEBUG nova.virt.libvirt.driver [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e39fbf7a-5b10-4f35-b531-efb11df8a34b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.424 239549 DEBUG nova.virt.libvirt.guest [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-2318c826-8ab2-4990-9416-1613c3176940">
Feb 02 15:38:05 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <serial>2318c826-8ab2-4990-9416-1613c3176940</serial>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:38:05 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="e39d9b4d-4006-4c9b-b9d0-0af6f94613e5"/>
Feb 02 15:38:05 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:38:05 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:05 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.500 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.501 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.506 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.507 239549 INFO nova.compute.claims [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.527 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046685.5269117, e39fbf7a-5b10-4f35-b531-efb11df8a34b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.528 239549 DEBUG nova.virt.libvirt.driver [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e39fbf7a-5b10-4f35-b531-efb11df8a34b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.530 239549 INFO nova.virt.libvirt.driver [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Successfully detached device vdb from instance e39fbf7a-5b10-4f35-b531-efb11df8a34b from the live domain config.
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.562 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 28 KiB/s wr, 179 op/s
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.615 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.676 239549 DEBUG nova.objects.instance [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'flavor' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]: [
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:     {
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "available": false,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "being_replaced": false,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "ceph_device_lvm": false,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "lsm_data": {},
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "lvs": [],
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "path": "/dev/sr0",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "rejected_reasons": [
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "Insufficient space (<5GB)",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "Has a FileSystem"
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         ],
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         "sys_api": {
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "actuators": null,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "device_nodes": [
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:                 "sr0"
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             ],
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "devname": "sr0",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "human_readable_size": "482.00 KB",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "id_bus": "ata",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "model": "QEMU DVD-ROM",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "nr_requests": "2",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "parent": "/dev/sr0",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "partitions": {},
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "path": "/dev/sr0",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "removable": "1",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "rev": "2.5+",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "ro": "0",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "rotational": "1",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "sas_address": "",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "sas_device_handle": "",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "scheduler_mode": "mq-deadline",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "sectors": 0,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "sectorsize": "2048",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "size": 493568.0,
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "support_discard": "2048",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "type": "disk",
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:             "vendor": "QEMU"
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:         }
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]:     }
Feb 02 15:38:05 compute-0 dreamy_darwin[255746]: ]
Feb 02 15:38:05 compute-0 systemd[1]: libpod-cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f.scope: Deactivated successfully.
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.716262705 +0000 UTC m=+0.653016089 container died cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:38:05 compute-0 nova_compute[239545]: 2026-02-02 15:38:05.716 239549 DEBUG oslo_concurrency.lockutils [None req-ed5e36ea-e2b9-4258-8bb4-1813048a41fd b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc092460899e68657a3eb77c56921683731603513a160f5f6208026d9f589d2-merged.mount: Deactivated successfully.
Feb 02 15:38:05 compute-0 podman[255729]: 2026-02-02 15:38:05.759843718 +0000 UTC m=+0.696597102 container remove cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_darwin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: osdmap e290: 3 total, 3 up, 3 in
Feb 02 15:38:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2478840198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2478840198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: pgmap v1222: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 28 KiB/s wr, 179 op/s
Feb 02 15:38:05 compute-0 systemd[1]: libpod-conmon-cdf54e1a84deabd0490041738074af3c061f3429990ffec410c3cd0d1ea5fe6f.scope: Deactivated successfully.
Feb 02 15:38:05 compute-0 sudo[255650]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:38:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:38:05 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:38:05 compute-0 sudo[256663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:05 compute-0 sudo[256663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:05 compute-0 sudo[256663]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:05 compute-0 sudo[256688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:38:05 compute-0 sudo[256688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073223373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.228 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.231137226 +0000 UTC m=+0.039546787 container create 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.235 239549 DEBUG nova.compute.provider_tree [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.258 239549 DEBUG nova.scheduler.client.report [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:38:06 compute-0 systemd[1]: Started libpod-conmon-6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2.scope.
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.279 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.280 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.282 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.282 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.282 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.282 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.305620475 +0000 UTC m=+0.114030066 container init 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.216255296 +0000 UTC m=+0.024664877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.315031753 +0000 UTC m=+0.123441314 container start 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:38:06 compute-0 friendly_leavitt[256744]: 167 167
Feb 02 15:38:06 compute-0 systemd[1]: libpod-6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2.scope: Deactivated successfully.
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.319855369 +0000 UTC m=+0.128264930 container attach 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.320183707 +0000 UTC m=+0.128593268 container died 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.337 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.337 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c901ee283af1564dfe52d75de3509a8fffefc8d7f6aed465e9087265595901-merged.mount: Deactivated successfully.
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.360 239549 INFO nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:38:06 compute-0 podman[256726]: 2026-02-02 15:38:06.364144479 +0000 UTC m=+0.172554050 container remove 6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:38:06 compute-0 systemd[1]: libpod-conmon-6873d0b958db9d9beb43cd42e7157cc27146e6b8b9844efb16f528034ec63ef2.scope: Deactivated successfully.
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.389 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.491 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.492 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.492 239549 INFO nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Creating image(s)
Feb 02 15:38:06 compute-0 podman[256787]: 2026-02-02 15:38:06.496086577 +0000 UTC m=+0.035261623 container create acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.513 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:06 compute-0 systemd[1]: Started libpod-conmon-acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc.scope.
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.539 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.560 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.566 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:06 compute-0 podman[256787]: 2026-02-02 15:38:06.481176447 +0000 UTC m=+0.020351523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:06 compute-0 podman[256787]: 2026-02-02 15:38:06.586489952 +0000 UTC m=+0.125664998 container init acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:06 compute-0 podman[256787]: 2026-02-02 15:38:06.591038512 +0000 UTC m=+0.130213558 container start acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:38:06 compute-0 podman[256787]: 2026-02-02 15:38:06.595980631 +0000 UTC m=+0.135155687 container attach acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.614 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.614 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.615 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.615 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.633 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.637 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0478993b-8261-4780-971f-04d18afc9603_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.692 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.692 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.693 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.693 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.693 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.694 239549 INFO nova.compute.manager [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Terminating instance
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.695 239549 DEBUG nova.compute.manager [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:38:06 compute-0 kernel: tapf11c6544-08 (unregistering): left promiscuous mode
Feb 02 15:38:06 compute-0 NetworkManager[49171]: <info>  [1770046686.7340] device (tapf11c6544-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:38:06 compute-0 ovn_controller[144995]: 2026-02-02T15:38:06Z|00114|binding|INFO|Releasing lport f11c6544-0831-4f4d-9959-e8a813d59f02 from this chassis (sb_readonly=0)
Feb 02 15:38:06 compute-0 ovn_controller[144995]: 2026-02-02T15:38:06Z|00115|binding|INFO|Setting lport f11c6544-0831-4f4d-9959-e8a813d59f02 down in Southbound
Feb 02 15:38:06 compute-0 ovn_controller[144995]: 2026-02-02T15:38:06Z|00116|binding|INFO|Removing iface tapf11c6544-08 ovn-installed in OVS
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.755 239549 DEBUG nova.policy [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '630312472f584d3aa673cad217006b1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ab4d9435497e4a81a051bfaeef7c7de5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.761 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:06.765 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:87:92 10.100.0.12'], port_security=['fa:16:3e:b3:87:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e39fbf7a-5b10-4f35-b531-efb11df8a34b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e969847-ba87-4ece-858b-96e1806f85b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '115560eaceb947abbaeaf329e9ab5683', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562adfb0-4a8a-4daf-b4fb-8eb4927d45ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5933466e-0821-4132-8ebe-9aba51456576, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f11c6544-0831-4f4d-9959-e8a813d59f02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:38:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:06.766 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f11c6544-0831-4f4d-9959-e8a813d59f02 in datapath 4e969847-ba87-4ece-858b-96e1806f85b1 unbound from our chassis
Feb 02 15:38:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:06.768 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e969847-ba87-4ece-858b-96e1806f85b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:38:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:06.771 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6606bc-4846-4de5-a9b3-9358292f873e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:06.775 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1 namespace which is not needed anymore
Feb 02 15:38:06 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 02 15:38:06 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 14.394s CPU time.
Feb 02 15:38:06 compute-0 systemd-machined[207609]: Machine qemu-10-instance-0000000a terminated.
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2073223373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb 02 15:38:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540431715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb 02 15:38:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.863 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.866 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0478993b-8261-4780-971f-04d18afc9603_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:06 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [NOTICE]   (255389) : haproxy version is 2.8.14-c23fe91
Feb 02 15:38:06 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [NOTICE]   (255389) : path to executable is /usr/sbin/haproxy
Feb 02 15:38:06 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [WARNING]  (255389) : Exiting Master process...
Feb 02 15:38:06 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [ALERT]    (255389) : Current worker (255391) exited with code 143 (Terminated)
Feb 02 15:38:06 compute-0 neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1[255385]: [WARNING]  (255389) : All workers exited. Exiting... (0)
Feb 02 15:38:06 compute-0 systemd[1]: libpod-745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47.scope: Deactivated successfully.
Feb 02 15:38:06 compute-0 podman[256932]: 2026-02-02 15:38:06.895649531 +0000 UTC m=+0.056479665 container died 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8481155138db3627c94078e08e7397c1b95c3139b06d0c44f00233952fe03f5d-merged.mount: Deactivated successfully.
Feb 02 15:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47-userdata-shm.mount: Deactivated successfully.
Feb 02 15:38:06 compute-0 podman[256932]: 2026-02-02 15:38:06.937252627 +0000 UTC m=+0.098082751 container cleanup 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Feb 02 15:38:06 compute-0 systemd[1]: libpod-conmon-745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47.scope: Deactivated successfully.
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.958 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] resizing rbd image 0478993b-8261-4780-971f-04d18afc9603_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.988 239549 INFO nova.virt.libvirt.driver [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Instance destroyed successfully.
Feb 02 15:38:06 compute-0 tender_hellman[256837]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:38:06 compute-0 tender_hellman[256837]: --> All data devices are unavailable
Feb 02 15:38:06 compute-0 nova_compute[239545]: 2026-02-02 15:38:06.991 239549 DEBUG nova.objects.instance [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lazy-loading 'resources' on Instance uuid e39fbf7a-5b10-4f35-b531-efb11df8a34b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:07 compute-0 systemd[1]: libpod-acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc.scope: Deactivated successfully.
Feb 02 15:38:07 compute-0 conmon[256837]: conmon acedc2e8e82372d379a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc.scope/container/memory.events
Feb 02 15:38:07 compute-0 podman[256787]: 2026-02-02 15:38:07.013577231 +0000 UTC m=+0.552752317 container died acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:38:07 compute-0 podman[257013]: 2026-02-02 15:38:07.033136774 +0000 UTC m=+0.075383143 container remove 745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.037 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3d0411-99a2-4cdc-8a48-9c0196e7ac03]: (4, ('Mon Feb  2 03:38:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1 (745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47)\n745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47\nMon Feb  2 03:38:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1 (745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47)\n745803b9a3d921f0022738dd96d15a300216e68b392fe1d425be220101b2ef47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.041 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3bbce7-e447-4689-87d8-0fd282b64c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.042 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e969847-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:07 compute-0 kernel: tap4e969847-b0: left promiscuous mode
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.058 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b3740669-0b8e-424f-b017-c1d8a9a5b157]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 podman[256787]: 2026-02-02 15:38:07.075678192 +0000 UTC m=+0.614853238 container remove acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.078 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9669baed-8569-4236-88dd-506e0a7cc2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 systemd[1]: libpod-conmon-acedc2e8e82372d379a0308b20732983cfd0a13fe05e1be229af94a4400e95dc.scope: Deactivated successfully.
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.081 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3b64b728-c40b-402b-93fe-383611a66978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.089 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.091 239549 DEBUG nova.virt.libvirt.vif [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:37:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1268133418',display_name='tempest-TestEncryptedCinderVolumes-server-1268133418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1268133418',id=10,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJLlMvuNCYtL11DjRJ1K5ygIqHM/lR7WnMbq8DojV+lv2C2/WKdvjdC2b5d3qqOO33vsgTNfmOxGVgH90dQgZIdYWO430u/oR9Jo6xHCtxYNxJboO7WvaiIF21O8RQkmw==',key_name='tempest-keypair-974496074',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:37:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='115560eaceb947abbaeaf329e9ab5683',ramdisk_id='',reservation_id='r-rkzwuwa9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1499518443',owner_user_name='tempest-TestEncryptedCinderVolumes-1499518443-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:37:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b51e52171e514748b1584f228f0231ac',uuid=e39fbf7a-5b10-4f35-b531-efb11df8a34b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.091 239549 DEBUG nova.network.os_vif_util [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converting VIF {"id": "f11c6544-0831-4f4d-9959-e8a813d59f02", "address": "fa:16:3e:b3:87:92", "network": {"id": "4e969847-ba87-4ece-858b-96e1806f85b1", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1394112360-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "115560eaceb947abbaeaf329e9ab5683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf11c6544-08", "ovs_interfaceid": "f11c6544-0831-4f4d-9959-e8a813d59f02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.092 239549 DEBUG nova.network.os_vif_util [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.092 239549 DEBUG os_vif [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.094 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.094 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf11c6544-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.094 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[856d4112-ef39-4ac2-9c37-3765046156b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414452, 'reachable_time': 29289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257083, 'error': None, 'target': 'ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.096 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e969847-ba87-4ece-858b-96e1806f85b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:38:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:07.097 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[62797196-2490-4ee4-afea-d534afc6fc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.100 239549 DEBUG nova.objects.instance [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'migration_context' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.101 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.102 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.105 239549 INFO os_vif [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:87:92,bridge_name='br-int',has_traffic_filtering=True,id=f11c6544-0831-4f4d-9959-e8a813d59f02,network=Network(4e969847-ba87-4ece-858b-96e1806f85b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf11c6544-08')
Feb 02 15:38:07 compute-0 sudo[256688]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.124 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.125 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Ensure instance console log exists: /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.125 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.126 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.126 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.150 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.150 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:38:07 compute-0 sudo[257101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:07 compute-0 sudo[257101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:07 compute-0 sudo[257101]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.202 239549 DEBUG nova.compute.manager [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-unplugged-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.202 239549 DEBUG oslo_concurrency.lockutils [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.203 239549 DEBUG oslo_concurrency.lockutils [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.203 239549 DEBUG oslo_concurrency.lockutils [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.203 239549 DEBUG nova.compute.manager [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] No waiting events found dispatching network-vif-unplugged-f11c6544-0831-4f4d-9959-e8a813d59f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.203 239549 DEBUG nova.compute.manager [req-c044be72-d5a2-4787-81bb-0efc3e759dc8 req-0514b205-f8b7-4b91-8a61-f08d675cc263 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-unplugged-f11c6544-0831-4f4d-9959-e8a813d59f02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:38:07 compute-0 sudo[257129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:38:07 compute-0 sudo[257129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7030fff81050787f2df3798c9a14882ba1454cde921b36141e7331a608e1223-merged.mount: Deactivated successfully.
Feb 02 15:38:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d4e969847\x2dba87\x2d4ece\x2d858b\x2d96e1806f85b1.mount: Deactivated successfully.
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.320 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.321 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=59.94263072125614GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.322 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.322 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance e39fbf7a-5b10-4f35-b531-efb11df8a34b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0478993b-8261-4780-971f-04d18afc9603 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.453 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483573546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483573546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.506650945 +0000 UTC m=+0.057824839 container create 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:07 compute-0 systemd[1]: Started libpod-conmon-857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e.scope.
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.47583228 +0000 UTC m=+0.027006204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 28 KiB/s wr, 152 op/s
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.632691471 +0000 UTC m=+0.183865375 container init 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.639036343 +0000 UTC m=+0.190210237 container start 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:38:07 compute-0 nervous_turing[257183]: 167 167
Feb 02 15:38:07 compute-0 systemd[1]: libpod-857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e.scope: Deactivated successfully.
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.651479084 +0000 UTC m=+0.202652998 container attach 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.652190212 +0000 UTC m=+0.203364106 container died 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-124683b6b2494fc2a9c4173bbf0f4f3bc54128f140532b60f504ae2d26e2571f-merged.mount: Deactivated successfully.
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.692 239549 INFO nova.virt.libvirt.driver [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Deleting instance files /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b_del
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.693 239549 INFO nova.virt.libvirt.driver [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Deletion of /var/lib/nova/instances/e39fbf7a-5b10-4f35-b531-efb11df8a34b_del complete
Feb 02 15:38:07 compute-0 podman[257166]: 2026-02-02 15:38:07.694502264 +0000 UTC m=+0.245676178 container remove 857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:38:07 compute-0 systemd[1]: libpod-conmon-857b65dd211cdd3bb57e77979d20db291739a6f7e7a32cfaafe4f4f2d079675e.scope: Deactivated successfully.
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.759 239549 INFO nova.compute.manager [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Took 1.06 seconds to destroy the instance on the hypervisor.
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.760 239549 DEBUG oslo.service.loopingcall [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.760 239549 DEBUG nova.compute.manager [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:38:07 compute-0 nova_compute[239545]: 2026-02-02 15:38:07.760 239549 DEBUG nova.network.neutron [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:38:07 compute-0 podman[257224]: 2026-02-02 15:38:07.826742789 +0000 UTC m=+0.037467296 container create 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2540431715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:07 compute-0 ceph-mon[75334]: osdmap e291: 3 total, 3 up, 3 in
Feb 02 15:38:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/483573546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/483573546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:07 compute-0 ceph-mon[75334]: pgmap v1224: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 28 KiB/s wr, 152 op/s
Feb 02 15:38:07 compute-0 systemd[1]: Started libpod-conmon-23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c.scope.
Feb 02 15:38:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6caf7d6839618f5848f7ada08b4b543d13bcc8f55264a762bd1ec85ba8d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6caf7d6839618f5848f7ada08b4b543d13bcc8f55264a762bd1ec85ba8d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6caf7d6839618f5848f7ada08b4b543d13bcc8f55264a762bd1ec85ba8d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6caf7d6839618f5848f7ada08b4b543d13bcc8f55264a762bd1ec85ba8d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:07 compute-0 podman[257224]: 2026-02-02 15:38:07.810754813 +0000 UTC m=+0.021479350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:07 compute-0 podman[257224]: 2026-02-02 15:38:07.916810796 +0000 UTC m=+0.127535323 container init 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:38:07 compute-0 podman[257224]: 2026-02-02 15:38:07.924309747 +0000 UTC m=+0.135034254 container start 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:38:07 compute-0 podman[257224]: 2026-02-02 15:38:07.928372185 +0000 UTC m=+0.139096742 container attach 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849188310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.017 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.025 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.040 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.061 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Successfully created port: 5cd195ef-887e-43b8-a695-421365d8d1ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.065 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.066 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:08 compute-0 elated_vaughan[257240]: {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     "0": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "devices": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "/dev/loop3"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             ],
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_name": "ceph_lv0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_size": "21470642176",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "name": "ceph_lv0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "tags": {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_name": "ceph",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.crush_device_class": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.encrypted": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.objectstore": "bluestore",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_id": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.vdo": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.with_tpm": "0"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             },
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "vg_name": "ceph_vg0"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         }
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     ],
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     "1": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "devices": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "/dev/loop4"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             ],
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_name": "ceph_lv1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_size": "21470642176",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "name": "ceph_lv1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "tags": {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_name": "ceph",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.crush_device_class": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.encrypted": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.objectstore": "bluestore",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_id": "1",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.vdo": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.with_tpm": "0"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             },
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "vg_name": "ceph_vg1"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         }
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     ],
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     "2": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "devices": [
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "/dev/loop5"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             ],
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_name": "ceph_lv2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_size": "21470642176",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "name": "ceph_lv2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "tags": {
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.cluster_name": "ceph",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.crush_device_class": "",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.encrypted": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.objectstore": "bluestore",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osd_id": "2",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.vdo": "0",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:                 "ceph.with_tpm": "0"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             },
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "type": "block",
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:             "vg_name": "ceph_vg2"
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:         }
Feb 02 15:38:08 compute-0 elated_vaughan[257240]:     ]
Feb 02 15:38:08 compute-0 elated_vaughan[257240]: }
Feb 02 15:38:08 compute-0 systemd[1]: libpod-23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c.scope: Deactivated successfully.
Feb 02 15:38:08 compute-0 conmon[257240]: conmon 23e70de1446b727bd31a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c.scope/container/memory.events
Feb 02 15:38:08 compute-0 podman[257224]: 2026-02-02 15:38:08.213882744 +0000 UTC m=+0.424607251 container died 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb6caf7d6839618f5848f7ada08b4b543d13bcc8f55264a762bd1ec85ba8d4d-merged.mount: Deactivated successfully.
Feb 02 15:38:08 compute-0 podman[257224]: 2026-02-02 15:38:08.25842531 +0000 UTC m=+0.469149817 container remove 23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:08 compute-0 systemd[1]: libpod-conmon-23e70de1446b727bd31aa82c041172a786289bd3e242da23a5d77cace3c7bd9c.scope: Deactivated successfully.
Feb 02 15:38:08 compute-0 sudo[257129]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:08 compute-0 sudo[257264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:38:08 compute-0 sudo[257264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:08 compute-0 sudo[257264]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:08 compute-0 sudo[257289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:38:08 compute-0 sudo[257289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:08 compute-0 nova_compute[239545]: 2026-02-02 15:38:08.679 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.687946577 +0000 UTC m=+0.069466538 container create cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:38:08 compute-0 systemd[1]: Started libpod-conmon-cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3.scope.
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.639340584 +0000 UTC m=+0.020860565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.772279476 +0000 UTC m=+0.153799457 container init cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.777231145 +0000 UTC m=+0.158751106 container start cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:38:08 compute-0 suspicious_leavitt[257341]: 167 167
Feb 02 15:38:08 compute-0 systemd[1]: libpod-cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3.scope: Deactivated successfully.
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.783049745 +0000 UTC m=+0.164569706 container attach cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.783406095 +0000 UTC m=+0.164926046 container died cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5330338619f900c918ed395053f6775c45047645378222599434b545a8166fa1-merged.mount: Deactivated successfully.
Feb 02 15:38:08 compute-0 podman[257325]: 2026-02-02 15:38:08.848774624 +0000 UTC m=+0.230294595 container remove cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:38:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/849188310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:08 compute-0 systemd[1]: libpod-conmon-cfcc27f87d761cb819bbc4ea3986617a3bc56462325e4a48d727c75a509585d3.scope: Deactivated successfully.
Feb 02 15:38:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609205953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609205953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.007985071 +0000 UTC m=+0.041626567 container create a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.036 239549 DEBUG nova.network.neutron [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:09 compute-0 systemd[1]: Started libpod-conmon-a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991.scope.
Feb 02 15:38:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dd838ad6d83a0334b1a939f30436b9a136987e40ca7cb9518224fa812b85b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dd838ad6d83a0334b1a939f30436b9a136987e40ca7cb9518224fa812b85b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dd838ad6d83a0334b1a939f30436b9a136987e40ca7cb9518224fa812b85b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3dd838ad6d83a0334b1a939f30436b9a136987e40ca7cb9518224fa812b85b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.066 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.068 239549 INFO nova.compute.manager [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Took 1.31 seconds to deallocate network for instance.
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.078836853 +0000 UTC m=+0.112478379 container init a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:08.986624455 +0000 UTC m=+0.020265981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.084151621 +0000 UTC m=+0.117793117 container start a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.087770909 +0000 UTC m=+0.121412485 container attach a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.093 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.093 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.093 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.124 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.125 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.198 239549 DEBUG oslo_concurrency.processutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.376 239549 DEBUG nova.compute.manager [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.377 239549 DEBUG oslo_concurrency.lockutils [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.380 239549 DEBUG oslo_concurrency.lockutils [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.380 239549 DEBUG oslo_concurrency.lockutils [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.381 239549 DEBUG nova.compute.manager [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] No waiting events found dispatching network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.381 239549 WARNING nova.compute.manager [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received unexpected event network-vif-plugged-f11c6544-0831-4f4d-9959-e8a813d59f02 for instance with vm_state deleted and task_state None.
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.381 239549 DEBUG nova.compute.manager [req-2fa43829-95d3-4baf-bc9d-e061afaf33fe req-7975f1f3-1807-43c2-8eaf-af0f33a4e36e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Received event network-vif-deleted-f11c6544-0831-4f4d-9959-e8a813d59f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 173 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 1.6 MiB/s wr, 154 op/s
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.606 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Successfully updated port: 5cd195ef-887e-43b8-a695-421365d8d1ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.636 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.636 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquired lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.636 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:38:09 compute-0 lvm[257476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:38:09 compute-0 lvm[257476]: VG ceph_vg0 finished
Feb 02 15:38:09 compute-0 lvm[257478]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:38:09 compute-0 lvm[257478]: VG ceph_vg1 finished
Feb 02 15:38:09 compute-0 lvm[257479]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:38:09 compute-0 lvm[257479]: VG ceph_vg2 finished
Feb 02 15:38:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321988294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.719 239549 DEBUG oslo_concurrency.processutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.726 239549 DEBUG nova.compute.provider_tree [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:38:09 compute-0 exciting_golick[257381]: {}
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.747 239549 DEBUG nova.scheduler.client.report [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:38:09 compute-0 systemd[1]: libpod-a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991.scope: Deactivated successfully.
Feb 02 15:38:09 compute-0 systemd[1]: libpod-a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991.scope: Consumed 1.060s CPU time.
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.769160363 +0000 UTC m=+0.802801869 container died a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.774 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3dd838ad6d83a0334b1a939f30436b9a136987e40ca7cb9518224fa812b85b0-merged.mount: Deactivated successfully.
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.805 239549 INFO nova.scheduler.client.report [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Deleted allocations for instance e39fbf7a-5b10-4f35-b531-efb11df8a34b
Feb 02 15:38:09 compute-0 podman[257365]: 2026-02-02 15:38:09.81127544 +0000 UTC m=+0.844916936 container remove a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:38:09 compute-0 systemd[1]: libpod-conmon-a27b6e755a950a8aa14920f87da6774dfce3859cf05eb2ff12b3bdff480db991.scope: Deactivated successfully.
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.816 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:38:09 compute-0 sudo[257289]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:38:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:38:09 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3609205953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3609205953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:09 compute-0 ceph-mon[75334]: pgmap v1225: 305 pgs: 305 active+clean; 173 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 1.6 MiB/s wr, 154 op/s
Feb 02 15:38:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3321988294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:09 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:09 compute-0 nova_compute[239545]: 2026-02-02 15:38:09.890 239549 DEBUG oslo_concurrency.lockutils [None req-0d0ea7cf-9ae7-4489-8475-f11929517639 b51e52171e514748b1584f228f0231ac 115560eaceb947abbaeaf329e9ab5683 - - default default] Lock "e39fbf7a-5b10-4f35-b531-efb11df8a34b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:09 compute-0 sudo[257497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:38:09 compute-0 sudo[257497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:38:09 compute-0 sudo[257497]: pam_unix(sudo:session): session closed for user root
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.500 239549 DEBUG nova.network.neutron [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updating instance_info_cache with network_info: [{"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.516 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Releasing lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.517 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Instance network_info: |[{"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.519 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Start _get_guest_xml network_info=[{"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.525 239549 WARNING nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.532 239549 DEBUG nova.virt.libvirt.host [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.533 239549 DEBUG nova.virt.libvirt.host [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.538 239549 DEBUG nova.virt.libvirt.host [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.539 239549 DEBUG nova.virt.libvirt.host [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.540 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.540 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.541 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.541 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.541 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.541 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.542 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.542 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.542 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.542 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.542 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.543 239549 DEBUG nova.virt.hardware [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.546 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:10 compute-0 nova_compute[239545]: 2026-02-02 15:38:10.565 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:38:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347487978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.126 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.143 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.146 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.461 239549 DEBUG nova.compute.manager [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-changed-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.462 239549 DEBUG nova.compute.manager [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Refreshing instance network info cache due to event network-changed-5cd195ef-887e-43b8-a695-421365d8d1ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.463 239549 DEBUG oslo_concurrency.lockutils [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.463 239549 DEBUG oslo_concurrency.lockutils [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.463 239549 DEBUG nova.network.neutron [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Refreshing network info cache for port 5cd195ef-887e-43b8-a695-421365d8d1ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:38:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 2.7 MiB/s wr, 255 op/s
Feb 02 15:38:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786814992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.796 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.798 239549 DEBUG nova.virt.libvirt.vif [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-341732747',display_name='tempest-VolumesExtendAttachedTest-instance-341732747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-341732747',id=11,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEi/ALaNSxgnlRX9FWWX8/c1Ia8XVxdvnoVBEwpV4DB09JZxKVCw9PFfREBYGQ87IQepjlJFnyjBPA3f1kTTLzyU9D/7EuGc5PAv2tfhhQ31//kTu2bw0CgFDNnlISartA==',key_name='tempest-keypair-1969230109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab4d9435497e4a81a051bfaeef7c7de5',ramdisk_id='',reservation_id='r-0f8mn8e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1890784903',owner_user_name='tempest-VolumesExtendAttachedTest-1890784903-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:38:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='630312472f584d3aa673cad217006b1c',uuid=0478993b-8261-4780-971f-04d18afc9603,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.799 239549 DEBUG nova.network.os_vif_util [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converting VIF {"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.799 239549 DEBUG nova.network.os_vif_util [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.802 239549 DEBUG nova.objects.instance [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.832 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <uuid>0478993b-8261-4780-971f-04d18afc9603</uuid>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <name>instance-0000000b</name>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-341732747</nova:name>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:38:10</nova:creationTime>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:user uuid="630312472f584d3aa673cad217006b1c">tempest-VolumesExtendAttachedTest-1890784903-project-member</nova:user>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:project uuid="ab4d9435497e4a81a051bfaeef7c7de5">tempest-VolumesExtendAttachedTest-1890784903</nova:project>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <nova:port uuid="5cd195ef-887e-43b8-a695-421365d8d1ca">
Feb 02 15:38:11 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <system>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="serial">0478993b-8261-4780-971f-04d18afc9603</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="uuid">0478993b-8261-4780-971f-04d18afc9603</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </system>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <os>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </os>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <features>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </features>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0478993b-8261-4780-971f-04d18afc9603_disk">
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </source>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0478993b-8261-4780-971f-04d18afc9603_disk.config">
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </source>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:38:11 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:b8:13:b8"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <target dev="tap5cd195ef-88"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/console.log" append="off"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <video>
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </video>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:38:11 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:38:11 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:38:11 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:38:11 compute-0 nova_compute[239545]: </domain>
Feb 02 15:38:11 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.832 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Preparing to wait for external event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.833 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.833 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.833 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.834 239549 DEBUG nova.virt.libvirt.vif [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-341732747',display_name='tempest-VolumesExtendAttachedTest-instance-341732747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-341732747',id=11,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEi/ALaNSxgnlRX9FWWX8/c1Ia8XVxdvnoVBEwpV4DB09JZxKVCw9PFfREBYGQ87IQepjlJFnyjBPA3f1kTTLzyU9D/7EuGc5PAv2tfhhQ31//kTu2bw0CgFDNnlISartA==',key_name='tempest-keypair-1969230109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab4d9435497e4a81a051bfaeef7c7de5',ramdisk_id='',reservation_id='r-0f8mn8e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1890784903',owner_user_name='tempest-VolumesExtendAttachedTest-1890784903-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:38:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='630312472f584d3aa673cad217006b1c',uuid=0478993b-8261-4780-971f-04d18afc9603,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.834 239549 DEBUG nova.network.os_vif_util [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converting VIF {"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.835 239549 DEBUG nova.network.os_vif_util [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.835 239549 DEBUG os_vif [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.836 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.836 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.837 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.842 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.842 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cd195ef-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.843 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cd195ef-88, col_values=(('external_ids', {'iface-id': '5cd195ef-887e-43b8-a695-421365d8d1ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:13:b8', 'vm-uuid': '0478993b-8261-4780-971f-04d18afc9603'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.844 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:11 compute-0 NetworkManager[49171]: <info>  [1770046691.8456] manager: (tap5cd195ef-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.848 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.851 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.852 239549 INFO os_vif [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88')
Feb 02 15:38:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb 02 15:38:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3347487978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:11 compute-0 ceph-mon[75334]: pgmap v1226: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 2.7 MiB/s wr, 255 op/s
Feb 02 15:38:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2786814992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.913 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.913 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.914 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No VIF found with MAC fa:16:3e:b8:13:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.915 239549 INFO nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Using config drive
Feb 02 15:38:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb 02 15:38:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb 02 15:38:11 compute-0 nova_compute[239545]: 2026-02-02 15:38:11.952 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.258 239549 INFO nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Creating config drive at /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.262 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6qju5fvw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.378 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6qju5fvw" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.400 239549 DEBUG nova.storage.rbd_utils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] rbd image 0478993b-8261-4780-971f-04d18afc9603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.404 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config 0478993b-8261-4780-971f-04d18afc9603_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.526 239549 DEBUG oslo_concurrency.processutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config 0478993b-8261-4780-971f-04d18afc9603_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.526 239549 INFO nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Deleting local config drive /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603/disk.config because it was imported into RBD.
Feb 02 15:38:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb 02 15:38:12 compute-0 kernel: tap5cd195ef-88: entered promiscuous mode
Feb 02 15:38:12 compute-0 systemd-udevd[257484]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.5675] manager: (tap5cd195ef-88): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Feb 02 15:38:12 compute-0 ovn_controller[144995]: 2026-02-02T15:38:12Z|00117|binding|INFO|Claiming lport 5cd195ef-887e-43b8-a695-421365d8d1ca for this chassis.
Feb 02 15:38:12 compute-0 ovn_controller[144995]: 2026-02-02T15:38:12Z|00118|binding|INFO|5cd195ef-887e-43b8-a695-421365d8d1ca: Claiming fa:16:3e:b8:13:b8 10.100.0.4
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.567 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.576 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:13:b8 10.100.0.4'], port_security=['fa:16:3e:b8:13:b8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0478993b-8261-4780-971f-04d18afc9603', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab4d9435497e4a81a051bfaeef7c7de5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '068c39b8-0553-41d7-9e5a-98b8ea9dd66e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b279c3b-4584-48ef-ad70-f37da209c47b, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=5cd195ef-887e-43b8-a695-421365d8d1ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:38:12 compute-0 ovn_controller[144995]: 2026-02-02T15:38:12Z|00119|binding|INFO|Setting lport 5cd195ef-887e-43b8-a695-421365d8d1ca ovn-installed in OVS
Feb 02 15:38:12 compute-0 ovn_controller[144995]: 2026-02-02T15:38:12Z|00120|binding|INFO|Setting lport 5cd195ef-887e-43b8-a695-421365d8d1ca up in Southbound
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.5789] device (tap5cd195ef-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.579 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.5808] device (tap5cd195ef-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.580 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 5cd195ef-887e-43b8-a695-421365d8d1ca in datapath c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec bound to our chassis
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.582 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.583 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 systemd-machined[207609]: New machine qemu-11-instance-0000000b.
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.592 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a1bbfbb2-8463-43f5-b42b-ddc10a703db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.593 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4a41c5c-31 in ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.594 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4a41c5c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.594 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e9e845-30bc-45c5-9932-3d21b4381ed1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.595 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[856ac345-92c3-4c3c-9e65-db9f3489230b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.603 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[c6151b24-51b5-4e8c-bfe9-19ec5e9e5227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.611 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b73a8e-e075-4f5b-92fe-c617f7550a35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.633 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[83452b7c-59f4-48a6-9888-1d245afe7d3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 systemd-udevd[257655]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.637 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb99a81-a806-4036-9ebf-c82fd5d85e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.6386] manager: (tapc4a41c5c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.662 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[21140535-9fd2-47df-bb4b-013384205063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.665 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[6c03af39-d001-422c-be78-2d4ab837c715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.6825] device (tapc4a41c5c-30): carrier: link connected
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.684 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4a6de7-afa2-4f41-89f3-94150a49bf6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.695 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b017a7-93d9-4fdd-9e67-08e87f616644]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a41c5c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:01:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417859, 'reachable_time': 22353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257688, 'error': None, 'target': 'ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.707 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[060ec9e7-3a7e-4d67-8de3-82b10c16f723]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:11f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417859, 'tstamp': 417859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257689, 'error': None, 'target': 'ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.718 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[85f8cf52-e787-4227-94ac-b20dcf4cd477]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a41c5c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:01:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417859, 'reachable_time': 22353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257690, 'error': None, 'target': 'ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.741 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[57cad36d-9213-4d0e-822b-701eb26bfb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.797 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b68b73bc-5cc1-42a2-9447-4ceedb281450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.799 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a41c5c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.800 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.801 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a41c5c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.803 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 NetworkManager[49171]: <info>  [1770046692.8039] manager: (tapc4a41c5c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Feb 02 15:38:12 compute-0 kernel: tapc4a41c5c-30: entered promiscuous mode
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.808 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a41c5c-30, col_values=(('external_ids', {'iface-id': '6b5211aa-1f3d-42a0-8ca1-95aab2ffb72d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:12 compute-0 ovn_controller[144995]: 2026-02-02T15:38:12Z|00121|binding|INFO|Releasing lport 6b5211aa-1f3d-42a0-8ca1-95aab2ffb72d from this chassis (sb_readonly=0)
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.810 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.813 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.814 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[48323240-b04c-49a1-bdf2-6098a458c95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.815 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec.pid.haproxy
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:38:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:12.816 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'env', 'PROCESS_TAG=haproxy-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.818 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:12 compute-0 ceph-mon[75334]: osdmap e292: 3 total, 3 up, 3 in
Feb 02 15:38:12 compute-0 ceph-mon[75334]: osdmap e293: 3 total, 3 up, 3 in
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.941 239549 DEBUG nova.network.neutron [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updated VIF entry in instance network info cache for port 5cd195ef-887e-43b8-a695-421365d8d1ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.943 239549 DEBUG nova.network.neutron [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updating instance_info_cache with network_info: [{"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:12 compute-0 nova_compute[239545]: 2026-02-02 15:38:12.972 239549 DEBUG oslo_concurrency.lockutils [req-5777d83a-161f-4e7a-87d4-b1c879f40800 req-867878d1-d3f8-448a-bc02-777bb0538073 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:38:13 compute-0 podman[257722]: 2026-02-02 15:38:13.196106309 +0000 UTC m=+0.047543587 container create de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:38:13 compute-0 systemd[1]: Started libpod-conmon-de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c.scope.
Feb 02 15:38:13 compute-0 podman[257722]: 2026-02-02 15:38:13.170341849 +0000 UTC m=+0.021779137 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:38:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de46c5f792f3e787224c067cae9ba85571f313491e3f0583495dcf8a7f30048/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:13 compute-0 podman[257722]: 2026-02-02 15:38:13.293414195 +0000 UTC m=+0.144851483 container init de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 15:38:13 compute-0 podman[257722]: 2026-02-02 15:38:13.300103167 +0000 UTC m=+0.151540435 container start de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:38:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674926348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674926348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:13 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [NOTICE]   (257782) : New worker (257785) forked
Feb 02 15:38:13 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [NOTICE]   (257782) : Loading success.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.379 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046693.3787777, 0478993b-8261-4780-971f-04d18afc9603 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.379 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] VM Started (Lifecycle Event)
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.406 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.412 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046693.3820946, 0478993b-8261-4780-971f-04d18afc9603 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.412 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] VM Paused (Lifecycle Event)
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.432 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.436 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.458 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.538 239549 DEBUG nova.compute.manager [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.539 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.539 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.539 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.539 239549 DEBUG nova.compute.manager [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Processing event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.540 239549 DEBUG nova.compute.manager [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.540 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.540 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.540 239549 DEBUG oslo_concurrency.lockutils [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.541 239549 DEBUG nova.compute.manager [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] No waiting events found dispatching network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.541 239549 WARNING nova.compute.manager [req-255ebe39-26fe-457f-94ad-f5d5fdea6574 req-876c1000-8521-4a01-9447-b248ba865665 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received unexpected event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca for instance with vm_state building and task_state spawning.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.541 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.545 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046693.5444007, 0478993b-8261-4780-971f-04d18afc9603 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.545 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] VM Resumed (Lifecycle Event)
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.548 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.551 239549 INFO nova.virt.libvirt.driver [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] Instance spawned successfully.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.551 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.569 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.580 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.585 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.585 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.586 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.587 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.587 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.588 239549 DEBUG nova.virt.libvirt.driver [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 3.2 MiB/s wr, 249 op/s
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.601 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.663 239549 INFO nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Took 7.17 seconds to spawn the instance on the hypervisor.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.664 239549 DEBUG nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.681 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.737 239549 INFO nova.compute.manager [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Took 8.26 seconds to build instance.
Feb 02 15:38:13 compute-0 nova_compute[239545]: 2026-02-02 15:38:13.759 239549 DEBUG oslo_concurrency.lockutils [None req-94304e05-e211-48cf-b3e2-610d766f0324 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3674926348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3674926348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:13 compute-0 ceph-mon[75334]: pgmap v1229: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 3.2 MiB/s wr, 249 op/s
Feb 02 15:38:14 compute-0 nova_compute[239545]: 2026-02-02 15:38:14.646 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:14.647 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:38:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:14.648 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb 02 15:38:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb 02 15:38:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb 02 15:38:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 335 op/s
Feb 02 15:38:15 compute-0 nova_compute[239545]: 2026-02-02 15:38:15.612 239549 DEBUG nova.compute.manager [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-changed-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:15 compute-0 nova_compute[239545]: 2026-02-02 15:38:15.613 239549 DEBUG nova.compute.manager [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Refreshing instance network info cache due to event network-changed-5cd195ef-887e-43b8-a695-421365d8d1ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:38:15 compute-0 nova_compute[239545]: 2026-02-02 15:38:15.613 239549 DEBUG oslo_concurrency.lockutils [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:38:15 compute-0 nova_compute[239545]: 2026-02-02 15:38:15.613 239549 DEBUG oslo_concurrency.lockutils [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:38:15 compute-0 nova_compute[239545]: 2026-02-02 15:38:15.613 239549 DEBUG nova.network.neutron [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Refreshing network info cache for port 5cd195ef-887e-43b8-a695-421365d8d1ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:38:15 compute-0 ceph-mon[75334]: osdmap e294: 3 total, 3 up, 3 in
Feb 02 15:38:15 compute-0 ceph-mon[75334]: pgmap v1231: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 335 op/s
Feb 02 15:38:16 compute-0 nova_compute[239545]: 2026-02-02 15:38:16.844 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:16 compute-0 nova_compute[239545]: 2026-02-02 15:38:16.910 239549 DEBUG nova.network.neutron [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updated VIF entry in instance network info cache for port 5cd195ef-887e-43b8-a695-421365d8d1ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:38:16 compute-0 nova_compute[239545]: 2026-02-02 15:38:16.911 239549 DEBUG nova.network.neutron [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updating instance_info_cache with network_info: [{"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:16 compute-0 nova_compute[239545]: 2026-02-02 15:38:16.933 239549 DEBUG oslo_concurrency.lockutils [req-a92894b1-3f43-4a03-a571-3901ff798864 req-ffc13d99-3c36-44e3-b49d-94eb539758f8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0478993b-8261-4780-971f-04d18afc9603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:38:17 compute-0 ovn_controller[144995]: 2026-02-02T15:38:17Z|00122|binding|INFO|Releasing lport 6b5211aa-1f3d-42a0-8ca1-95aab2ffb72d from this chassis (sb_readonly=0)
Feb 02 15:38:17 compute-0 nova_compute[239545]: 2026-02-02 15:38:17.049 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 33 KiB/s wr, 152 op/s
Feb 02 15:38:18 compute-0 ceph-mon[75334]: pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 33 KiB/s wr, 152 op/s
Feb 02 15:38:18 compute-0 nova_compute[239545]: 2026-02-02 15:38:18.683 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 26 KiB/s wr, 182 op/s
Feb 02 15:38:19 compute-0 ovn_controller[144995]: 2026-02-02T15:38:19Z|00123|binding|INFO|Releasing lport 6b5211aa-1f3d-42a0-8ca1-95aab2ffb72d from this chassis (sb_readonly=0)
Feb 02 15:38:19 compute-0 nova_compute[239545]: 2026-02-02 15:38:19.778 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb 02 15:38:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb 02 15:38:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb 02 15:38:20 compute-0 ceph-mon[75334]: pgmap v1233: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 26 KiB/s wr, 182 op/s
Feb 02 15:38:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3520898905' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3520898905' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 27 KiB/s wr, 257 op/s
Feb 02 15:38:21 compute-0 ceph-mon[75334]: osdmap e295: 3 total, 3 up, 3 in
Feb 02 15:38:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3520898905' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3520898905' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:21 compute-0 ceph-mon[75334]: pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 27 KiB/s wr, 257 op/s
Feb 02 15:38:21 compute-0 nova_compute[239545]: 2026-02-02 15:38:21.847 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:21 compute-0 nova_compute[239545]: 2026-02-02 15:38:21.950 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046686.9239435, e39fbf7a-5b10-4f35-b531-efb11df8a34b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:21 compute-0 nova_compute[239545]: 2026-02-02 15:38:21.950 239549 INFO nova.compute.manager [-] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] VM Stopped (Lifecycle Event)
Feb 02 15:38:21 compute-0 nova_compute[239545]: 2026-02-02 15:38:21.966 239549 DEBUG nova.compute.manager [None req-b079e0d9-6d8c-4f03-9623-17029dd55737 - - - - - -] [instance: e39fbf7a-5b10-4f35-b531-efb11df8a34b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb 02 15:38:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb 02 15:38:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb 02 15:38:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451356481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451356481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 KiB/s wr, 174 op/s
Feb 02 15:38:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389631322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389631322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:23.649 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:23 compute-0 ceph-mon[75334]: osdmap e296: 3 total, 3 up, 3 in
Feb 02 15:38:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/451356481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/451356481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mon[75334]: pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 KiB/s wr, 174 op/s
Feb 02 15:38:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/389631322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/389631322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:23 compute-0 nova_compute[239545]: 2026-02-02 15:38:23.731 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 153 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 390 op/s
Feb 02 15:38:26 compute-0 ovn_controller[144995]: 2026-02-02T15:38:26Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:13:b8 10.100.0.4
Feb 02 15:38:26 compute-0 ovn_controller[144995]: 2026-02-02T15:38:26Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:13:b8 10.100.0.4
Feb 02 15:38:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb 02 15:38:26 compute-0 ceph-mon[75334]: pgmap v1238: 305 pgs: 305 active+clean; 153 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 390 op/s
Feb 02 15:38:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb 02 15:38:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb 02 15:38:26 compute-0 nova_compute[239545]: 2026-02-02 15:38:26.848 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb 02 15:38:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 153 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 3.1 MiB/s wr, 277 op/s
Feb 02 15:38:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb 02 15:38:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.629466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707629497, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 264, "total_data_size": 1877826, "memory_usage": 1906496, "flush_reason": "Manual Compaction"}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707640556, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1832615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23969, "largest_seqno": 25343, "table_properties": {"data_size": 1825786, "index_size": 3904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14719, "raw_average_key_size": 20, "raw_value_size": 1811927, "raw_average_value_size": 2513, "num_data_blocks": 173, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046624, "oldest_key_time": 1770046624, "file_creation_time": 1770046707, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11129 microseconds, and 3101 cpu microseconds.
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.640594) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1832615 bytes OK
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.640611) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.642266) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.642280) EVENT_LOG_v1 {"time_micros": 1770046707642276, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.642296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1871405, prev total WAL file size 1871405, number of live WAL files 2.
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.642793) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1789KB)], [53(9087KB)]
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707642850, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11137735, "oldest_snapshot_seqno": -1}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5440 keys, 11033419 bytes, temperature: kUnknown
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707736075, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11033419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10990149, "index_size": 28568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 134985, "raw_average_key_size": 24, "raw_value_size": 10885380, "raw_average_value_size": 2000, "num_data_blocks": 1180, "num_entries": 5440, "num_filter_entries": 5440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046707, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:38:27 compute-0 ceph-mon[75334]: osdmap e297: 3 total, 3 up, 3 in
Feb 02 15:38:27 compute-0 ceph-mon[75334]: osdmap e298: 3 total, 3 up, 3 in
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.736572) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11033419 bytes
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.760976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.9 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 5981, records dropped: 541 output_compression: NoCompression
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.761017) EVENT_LOG_v1 {"time_micros": 1770046707761000, "job": 28, "event": "compaction_finished", "compaction_time_micros": 93369, "compaction_time_cpu_micros": 18561, "output_level": 6, "num_output_files": 1, "total_output_size": 11033419, "num_input_records": 5981, "num_output_records": 5440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707761392, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046707762153, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.642678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.762197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.762202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.762204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.762205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:27 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:38:27.762207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:38:28 compute-0 nova_compute[239545]: 2026-02-02 15:38:28.102 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227922716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227922716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:28 compute-0 nova_compute[239545]: 2026-02-02 15:38:28.733 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:28 compute-0 ceph-mon[75334]: pgmap v1240: 305 pgs: 305 active+clean; 153 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 3.1 MiB/s wr, 277 op/s
Feb 02 15:38:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/227922716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/227922716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 156 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 509 KiB/s rd, 3.6 MiB/s wr, 322 op/s
Feb 02 15:38:29 compute-0 ceph-mon[75334]: pgmap v1242: 305 pgs: 305 active+clean; 156 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 509 KiB/s rd, 3.6 MiB/s wr, 322 op/s
Feb 02 15:38:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb 02 15:38:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb 02 15:38:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb 02 15:38:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2905996065' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2905996065' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 715 KiB/s wr, 135 op/s
Feb 02 15:38:31 compute-0 ceph-mon[75334]: osdmap e299: 3 total, 3 up, 3 in
Feb 02 15:38:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2905996065' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2905996065' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:31 compute-0 ceph-mon[75334]: pgmap v1244: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 715 KiB/s wr, 135 op/s
Feb 02 15:38:31 compute-0 nova_compute[239545]: 2026-02-02 15:38:31.850 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:32 compute-0 nova_compute[239545]: 2026-02-02 15:38:32.289 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Feb 02 15:38:33 compute-0 podman[257796]: 2026-02-02 15:38:33.316249637 +0000 UTC m=+0.051411079 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent)
Feb 02 15:38:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Feb 02 15:38:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Feb 02 15:38:33 compute-0 podman[257795]: 2026-02-02 15:38:33.365282439 +0000 UTC m=+0.102331417 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:38:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 715 KiB/s wr, 141 op/s
Feb 02 15:38:33 compute-0 nova_compute[239545]: 2026-02-02 15:38:33.768 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:34 compute-0 ceph-mon[75334]: osdmap e300: 3 total, 3 up, 3 in
Feb 02 15:38:34 compute-0 ceph-mon[75334]: pgmap v1246: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 417 KiB/s rd, 715 KiB/s wr, 141 op/s
Feb 02 15:38:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1076642953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1076642953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1076642953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1076642953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 558 KiB/s wr, 188 op/s
Feb 02 15:38:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613750702' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613750702' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:36 compute-0 ceph-mon[75334]: pgmap v1247: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 558 KiB/s wr, 188 op/s
Feb 02 15:38:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2613750702' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2613750702' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:36 compute-0 nova_compute[239545]: 2026-02-02 15:38:36.565 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:36 compute-0 nova_compute[239545]: 2026-02-02 15:38:36.854 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Feb 02 15:38:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Feb 02 15:38:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Feb 02 15:38:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 24 KiB/s wr, 101 op/s
Feb 02 15:38:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/196673256' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/196673256' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:38 compute-0 ceph-mon[75334]: osdmap e301: 3 total, 3 up, 3 in
Feb 02 15:38:38 compute-0 ceph-mon[75334]: pgmap v1249: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 24 KiB/s wr, 101 op/s
Feb 02 15:38:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/196673256' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/196673256' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:38 compute-0 nova_compute[239545]: 2026-02-02 15:38:38.770 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:39 compute-0 nova_compute[239545]: 2026-02-02 15:38:39.357 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 21 KiB/s wr, 120 op/s
Feb 02 15:38:40 compute-0 ceph-mon[75334]: pgmap v1250: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 21 KiB/s wr, 120 op/s
Feb 02 15:38:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 26 KiB/s wr, 127 op/s
Feb 02 15:38:41 compute-0 nova_compute[239545]: 2026-02-02 15:38:41.866 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:42 compute-0 ceph-mon[75334]: pgmap v1251: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 26 KiB/s wr, 127 op/s
Feb 02 15:38:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:38:42
Feb 02 15:38:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:38:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:38:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Feb 02 15:38:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:38:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 22 KiB/s wr, 105 op/s
Feb 02 15:38:43 compute-0 nova_compute[239545]: 2026-02-02 15:38:43.772 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:43 compute-0 ceph-mon[75334]: pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 22 KiB/s wr, 105 op/s
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:38:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:38:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Feb 02 15:38:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Feb 02 15:38:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Feb 02 15:38:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.873 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.873 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.894 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.964 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.964 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.974 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:38:45 compute-0 nova_compute[239545]: 2026-02-02 15:38:45.974 239549 INFO nova.compute.claims [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.222 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:46 compute-0 ceph-mon[75334]: osdmap e302: 3 total, 3 up, 3 in
Feb 02 15:38:46 compute-0 ceph-mon[75334]: pgmap v1254: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Feb 02 15:38:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938216137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.777 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.790 239549 DEBUG nova.compute.provider_tree [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.836 239549 DEBUG nova.scheduler.client.report [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.868 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.929 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.930 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.983 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:38:46 compute-0 nova_compute[239545]: 2026-02-02 15:38:46.984 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.005 239549 INFO nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.027 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.142 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.143 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.143 239549 INFO nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Creating image(s)
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.162 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.184 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.210 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.215 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.228 239549 DEBUG nova.policy [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '52fc74263c9d4d478b0b870727c4fa0c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.261 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.262 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.263 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.263 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.283 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.288 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 7.7 KiB/s wr, 43 op/s
Feb 02 15:38:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/938216137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:47 compute-0 nova_compute[239545]: 2026-02-02 15:38:47.999 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.064 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] resizing rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.146 239549 DEBUG nova.objects.instance [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'migration_context' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.255 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.256 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Ensure instance console log exists: /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.256 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.257 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.257 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.458 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.458 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.480 239549 DEBUG nova.objects.instance [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'flavor' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.502 239549 INFO nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Ignoring supplied device name: /dev/vdb
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.517 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:48 compute-0 ceph-mon[75334]: pgmap v1255: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 7.7 KiB/s wr, 43 op/s
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.774 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.800 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.801 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.801 239549 INFO nova.compute.manager [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Attaching volume 653f80bc-dc30-44e1-b124-ff341d1d8de1 to /dev/vdb
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.990 239549 DEBUG os_brick.utils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:38:48 compute-0 nova_compute[239545]: 2026-02-02 15:38:48.991 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.003 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.004 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[049fe260-0483-42e3-97c8-c17be971484c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.006 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.014 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.015 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[6f92e568-e73c-4938-a890-faf681a2c29a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.016 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.023 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.024 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4458fc-b2ac-4b80-a4c7-11c905f2d981]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.025 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1bc145-755b-43d8-901e-57a0298a0919]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.025 239549 DEBUG oslo_concurrency.processutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.045 239549 DEBUG oslo_concurrency.processutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.048 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.048 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.049 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.049 239549 DEBUG os_brick.utils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.049 239549 DEBUG nova.virt.block_device [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updating existing volume attachment record: 0d769ebf-0b12-4998-946b-d2aecf2aec98 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:38:49 compute-0 nova_compute[239545]: 2026-02-02 15:38:49.442 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Successfully created port: ff69595e-71b6-4de9-a34f-11323c8da359 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:38:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 181 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 887 KiB/s wr, 18 op/s
Feb 02 15:38:49 compute-0 ceph-mon[75334]: pgmap v1256: 305 pgs: 305 active+clean; 181 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 887 KiB/s wr, 18 op/s
Feb 02 15:38:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384820226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.356 239549 DEBUG nova.objects.instance [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'flavor' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.383 239549 DEBUG nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Attempting to attach volume 653f80bc-dc30-44e1-b124-ff341d1d8de1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.385 239549 DEBUG nova.virt.libvirt.guest [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:38:50 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-653f80bc-dc30-44e1-b124-ff341d1d8de1">
Feb 02 15:38:50 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:38:50 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:50 compute-0 nova_compute[239545]:   <serial>653f80bc-dc30-44e1-b124-ff341d1d8de1</serial>
Feb 02 15:38:50 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:50 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.543 239549 DEBUG nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.543 239549 DEBUG nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.544 239549 DEBUG nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:50 compute-0 nova_compute[239545]: 2026-02-02 15:38:50.544 239549 DEBUG nova.virt.libvirt.driver [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] No VIF found with MAC fa:16:3e:b8:13:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:38:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1384820226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.249 239549 DEBUG oslo_concurrency.lockutils [None req-7124294f-b09d-4ccb-a5c2-4b3efcb5cfed 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.316 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Successfully updated port: ff69595e-71b6-4de9-a34f-11323c8da359 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.329 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.329 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.329 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.425 239549 DEBUG nova.compute.manager [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.425 239549 DEBUG nova.compute.manager [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing instance network info cache due to event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.425 239549 DEBUG oslo_concurrency.lockutils [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.484 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:38:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 213 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Feb 02 15:38:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Feb 02 15:38:51 compute-0 ceph-mon[75334]: pgmap v1257: 305 pgs: 305 active+clean; 213 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Feb 02 15:38:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Feb 02 15:38:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Feb 02 15:38:51 compute-0 nova_compute[239545]: 2026-02-02 15:38:51.918 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.386 239549 DEBUG nova.network.neutron [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.405 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.406 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Instance network_info: |[{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.407 239549 DEBUG oslo_concurrency.lockutils [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.407 239549 DEBUG nova.network.neutron [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.413 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Start _get_guest_xml network_info=[{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.420 239549 WARNING nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.424 239549 DEBUG nova.virt.libvirt.host [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.425 239549 DEBUG nova.virt.libvirt.host [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.435 239549 DEBUG nova.virt.libvirt.host [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.435 239549 DEBUG nova.virt.libvirt.host [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.436 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.436 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.436 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.437 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.438 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.438 239549 DEBUG nova.virt.hardware [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.441 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:52 compute-0 ceph-mon[75334]: osdmap e303: 3 total, 3 up, 3 in
Feb 02 15:38:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221681877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:52 compute-0 nova_compute[239545]: 2026-02-02 15:38:52.985 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.016 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.019 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.187 239549 DEBUG nova.compute.manager [req-84905b68-8ba3-4500-9517-bb2ddbc26c5d req-1c613e01-46cd-4b4e-ac65-b89f556867d8 1799f1bbc3934bd8a00777481c6c55b2 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event volume-extended-653f80bc-dc30-44e1-b124-ff341d1d8de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.204 239549 DEBUG nova.compute.manager [req-84905b68-8ba3-4500-9517-bb2ddbc26c5d req-1c613e01-46cd-4b4e-ac65-b89f556867d8 1799f1bbc3934bd8a00777481c6c55b2 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Handling volume-extended event for volume 653f80bc-dc30-44e1-b124-ff341d1d8de1 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.222 239549 INFO nova.compute.manager [req-84905b68-8ba3-4500-9517-bb2ddbc26c5d req-1c613e01-46cd-4b4e-ac65-b89f556867d8 1799f1bbc3934bd8a00777481c6c55b2 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Cinder extended volume 653f80bc-dc30-44e1-b124-ff341d1d8de1; extending it to detect new size
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.355 239549 DEBUG nova.virt.libvirt.driver [req-84905b68-8ba3-4500-9517-bb2ddbc26c5d req-1c613e01-46cd-4b4e-ac65-b89f556867d8 1799f1bbc3934bd8a00777481c6c55b2 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.565 239549 DEBUG nova.network.neutron [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updated VIF entry in instance network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.565 239549 DEBUG nova.network.neutron [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.582 239549 DEBUG oslo_concurrency.lockutils [req-522c619d-7fcd-4a0c-ac0d-e8745a385064 req-8e5f5b55-4985-4b37-b4d7-bc0febc33f5b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:38:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Feb 02 15:38:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:38:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300381513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.650 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.651 239549 DEBUG nova.virt.libvirt.vif [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:38:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1259992382',display_name='tempest-TestStampPattern-server-1259992382',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1259992382',id=12,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-iab8kkrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:38:47Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=0cd0267f-d963-4475-aa31-ae2d3864ad80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.652 239549 DEBUG nova.network.os_vif_util [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.652 239549 DEBUG nova.network.os_vif_util [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.653 239549 DEBUG nova.objects.instance [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.671 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <uuid>0cd0267f-d963-4475-aa31-ae2d3864ad80</uuid>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <name>instance-0000000c</name>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:name>tempest-TestStampPattern-server-1259992382</nova:name>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:38:52</nova:creationTime>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:user uuid="52fc74263c9d4d478b0b870727c4fa0c">tempest-TestStampPattern-2129228693-project-member</nova:user>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:project uuid="46fcff5180ad4462a78fc4ba0bf7c266">tempest-TestStampPattern-2129228693</nova:project>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <nova:port uuid="ff69595e-71b6-4de9-a34f-11323c8da359">
Feb 02 15:38:53 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <system>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="serial">0cd0267f-d963-4475-aa31-ae2d3864ad80</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="uuid">0cd0267f-d963-4475-aa31-ae2d3864ad80</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </system>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <os>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </os>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <features>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </features>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0cd0267f-d963-4475-aa31-ae2d3864ad80_disk">
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </source>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config">
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </source>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:38:53 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:71:83:95"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <target dev="tapff69595e-71"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/console.log" append="off"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <video>
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </video>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:38:53 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:38:53 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:38:53 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:38:53 compute-0 nova_compute[239545]: </domain>
Feb 02 15:38:53 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.673 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Preparing to wait for external event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.673 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.673 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.674 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.674 239549 DEBUG nova.virt.libvirt.vif [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:38:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1259992382',display_name='tempest-TestStampPattern-server-1259992382',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1259992382',id=12,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-iab8kkrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:38:47Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=0cd0267f-d963-4475-aa31-ae2d3864ad80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.675 239549 DEBUG nova.network.os_vif_util [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.675 239549 DEBUG nova.network.os_vif_util [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.676 239549 DEBUG os_vif [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.676 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.677 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.677 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.681 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.681 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff69595e-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.682 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff69595e-71, col_values=(('external_ids', {'iface-id': 'ff69595e-71b6-4de9-a34f-11323c8da359', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:83:95', 'vm-uuid': '0cd0267f-d963-4475-aa31-ae2d3864ad80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.683 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:53 compute-0 NetworkManager[49171]: <info>  [1770046733.6845] manager: (tapff69595e-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.686 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.689 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.690 239549 INFO os_vif [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71')
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.747 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.747 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.748 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No VIF found with MAC fa:16:3e:71:83:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.748 239549 INFO nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Using config drive
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.771 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:53 compute-0 nova_compute[239545]: 2026-02-02 15:38:53.776 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/221681877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:53 compute-0 ceph-mon[75334]: pgmap v1259: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Feb 02 15:38:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1300381513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.140 239549 INFO nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Creating config drive at /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.144 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmpzyk432 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.261 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmpzyk432" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.285 239549 DEBUG nova.storage.rbd_utils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.288 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.388 239549 DEBUG oslo_concurrency.lockutils [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.389 239549 DEBUG oslo_concurrency.lockutils [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.412 239549 DEBUG oslo_concurrency.processutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config 0cd0267f-d963-4475-aa31-ae2d3864ad80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.414 239549 INFO nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Deleting local config drive /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80/disk.config because it was imported into RBD.
Feb 02 15:38:54 compute-0 kernel: tapff69595e-71: entered promiscuous mode
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.4390] manager: (tapff69595e-71): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.438 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 ovn_controller[144995]: 2026-02-02T15:38:54Z|00124|binding|INFO|Claiming lport ff69595e-71b6-4de9-a34f-11323c8da359 for this chassis.
Feb 02 15:38:54 compute-0 ovn_controller[144995]: 2026-02-02T15:38:54Z|00125|binding|INFO|ff69595e-71b6-4de9-a34f-11323c8da359: Claiming fa:16:3e:71:83:95 10.100.0.10
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:38:54 compute-0 ovn_controller[144995]: 2026-02-02T15:38:54Z|00126|binding|INFO|Setting lport ff69595e-71b6-4de9-a34f-11323c8da359 ovn-installed in OVS
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011071838686426939 of space, bias 1.0, pg target 0.33215516059280814 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.450 239549 INFO nova.compute.manager [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Detaching volume 653f80bc-dc30-44e1-b124-ff341d1d8de1
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.451 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:83:95 10.100.0.10'], port_security=['fa:16:3e:71:83:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0cd0267f-d963-4475-aa31-ae2d3864ad80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ff69595e-71b6-4de9-a34f-11323c8da359) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035673800239481015 of space, bias 1.0, pg target 0.10702140071844304 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1179917118097102e-06 of space, bias 1.0, pg target 0.00033539751354291303 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659712465105791 of space, bias 1.0, pg target 0.19979137395317373 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3611454388324797e-06 of space, bias 4.0, pg target 0.0016333745265989757 quantized to 16 (current 16)
Feb 02 15:38:54 compute-0 ovn_controller[144995]: 2026-02-02T15:38:54Z|00127|binding|INFO|Setting lport ff69595e-71b6-4de9-a34f-11323c8da359 up in Southbound
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.452 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:38:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.452 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ff69595e-71b6-4de9-a34f-11323c8da359 in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 bound to our chassis
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.454 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f321435-d909-47d9-9978-c1a6e976cdf3
Feb 02 15:38:54 compute-0 systemd-udevd[258194]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.461 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae6afc0-f8c5-419a-a436-a2c1b70eee58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.462 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f321435-d1 in ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.463 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f321435-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.463 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[779ed9ac-dcad-41aa-af54-adfd8e3edc97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.464 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[80f5a667-9816-47ba-8773-9791d5f87c14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 systemd-machined[207609]: New machine qemu-12-instance-0000000c.
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.4701] device (tapff69595e-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.4705] device (tapff69595e-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.472 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[973183d5-0757-4bce-aff1-09786be3dadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.480 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5218b257-b04a-4973-b57c-f36738f71562]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.504 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f5f34-b0d3-430a-9fe0-082ea5a2e99e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.508 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4415ffc-441f-42eb-83e0-d55238a8e16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.5102] manager: (tap2f321435-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.530 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7da484-0ae4-4866-990a-e66ba730dcb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.534 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0647f8-d389-4599-bae4-3b3d4f10a077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.5565] device (tap2f321435-d0): carrier: link connected
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.558 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[21f3e37f-f72c-427b-acc4-9132343192bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.572 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b90ecb0-4874-4ab1-9ccf-54c336b3ea6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f321435-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:45:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422047, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258227, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.580 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[fd99bba2-9431-4286-bc9e-b76c17dc3c05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:45d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422047, 'tstamp': 422047}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258228, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.596 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[30605a8d-4dbb-4d89-a32a-dd9945cf9503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f321435-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:45:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422047, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258229, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.611 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[27ebe28c-0a47-4358-b861-df2f1f37cbe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.666 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8fae3114-c0fc-4e9b-9191-0878a0ee89f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.666 239549 INFO nova.virt.block_device [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Attempting to driver detach volume 653f80bc-dc30-44e1-b124-ff341d1d8de1 from mountpoint /dev/vdb
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.667 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f321435-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.668 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.668 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f321435-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:54 compute-0 kernel: tap2f321435-d0: entered promiscuous mode
Feb 02 15:38:54 compute-0 NetworkManager[49171]: <info>  [1770046734.6705] manager: (tap2f321435-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.669 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.673 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.675 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f321435-d0, col_values=(('external_ids', {'iface-id': '240bc225-e61e-427a-8aef-43d7550fa498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:54 compute-0 ovn_controller[144995]: 2026-02-02T15:38:54Z|00128|binding|INFO|Releasing lport 240bc225-e61e-427a-8aef-43d7550fa498 from this chassis (sb_readonly=0)
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.676 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.686 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.686 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f321435-d909-47d9-9978-c1a6e976cdf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f321435-d909-47d9-9978-c1a6e976cdf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.687 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[848293b1-4f74-48a5-a21f-a7476c013c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.688 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-2f321435-d909-47d9-9978-c1a6e976cdf3
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/2f321435-d909-47d9-9978-c1a6e976cdf3.pid.haproxy
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 2f321435-d909-47d9-9978-c1a6e976cdf3
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.689 239549 DEBUG nova.virt.libvirt.driver [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Attempting to detach device vdb from instance 0478993b-8261-4780-971f-04d18afc9603 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.689 239549 DEBUG nova.virt.libvirt.guest [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-653f80bc-dc30-44e1-b124-ff341d1d8de1">
Feb 02 15:38:54 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <serial>653f80bc-dc30-44e1-b124-ff341d1d8de1</serial>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:54 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:38:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:54.689 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'env', 'PROCESS_TAG=haproxy-2f321435-d909-47d9-9978-c1a6e976cdf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f321435-d909-47d9-9978-c1a6e976cdf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.704 239549 INFO nova.virt.libvirt.driver [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Successfully detached device vdb from instance 0478993b-8261-4780-971f-04d18afc9603 from the persistent domain config.
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.704 239549 DEBUG nova.virt.libvirt.driver [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0478993b-8261-4780-971f-04d18afc9603 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.705 239549 DEBUG nova.virt.libvirt.guest [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-653f80bc-dc30-44e1-b124-ff341d1d8de1">
Feb 02 15:38:54 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   </source>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <serial>653f80bc-dc30-44e1-b124-ff341d1d8de1</serial>
Feb 02 15:38:54 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:38:54 compute-0 nova_compute[239545]: </disk>
Feb 02 15:38:54 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.805 239549 DEBUG nova.compute.manager [req-bbc9cd46-4f52-4ae3-b619-b13bffe20103 req-482f4ffe-8051-4ec0-921c-4e51eb54f4d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.805 239549 DEBUG oslo_concurrency.lockutils [req-bbc9cd46-4f52-4ae3-b619-b13bffe20103 req-482f4ffe-8051-4ec0-921c-4e51eb54f4d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.805 239549 DEBUG oslo_concurrency.lockutils [req-bbc9cd46-4f52-4ae3-b619-b13bffe20103 req-482f4ffe-8051-4ec0-921c-4e51eb54f4d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.806 239549 DEBUG oslo_concurrency.lockutils [req-bbc9cd46-4f52-4ae3-b619-b13bffe20103 req-482f4ffe-8051-4ec0-921c-4e51eb54f4d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.806 239549 DEBUG nova.compute.manager [req-bbc9cd46-4f52-4ae3-b619-b13bffe20103 req-482f4ffe-8051-4ec0-921c-4e51eb54f4d1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Processing event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:38:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.817 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046734.8173556, 0478993b-8261-4780-971f-04d18afc9603 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.819 239549 DEBUG nova.virt.libvirt.driver [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0478993b-8261-4780-971f-04d18afc9603 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:38:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Feb 02 15:38:54 compute-0 nova_compute[239545]: 2026-02-02 15:38:54.822 239549 INFO nova.virt.libvirt.driver [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Successfully detached device vdb from instance 0478993b-8261-4780-971f-04d18afc9603 from the live domain config.
Feb 02 15:38:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.084 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046735.0839539, 0cd0267f-d963-4475-aa31-ae2d3864ad80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.084 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] VM Started (Lifecycle Event)
Feb 02 15:38:55 compute-0 podman[258304]: 2026-02-02 15:38:55.085480354 +0000 UTC m=+0.088843882 container create 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.086 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.094 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.099 239549 INFO nova.virt.libvirt.driver [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Instance spawned successfully.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.099 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:38:55 compute-0 podman[258304]: 2026-02-02 15:38:55.018351006 +0000 UTC m=+0.021714554 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.114 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.118 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.121 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.121 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.122 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.122 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.122 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.122 239549 DEBUG nova.virt.libvirt.driver [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:38:55 compute-0 systemd[1]: Started libpod-conmon-4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16.scope.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.149 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.149 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046735.0857568, 0cd0267f-d963-4475-aa31-ae2d3864ad80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.149 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] VM Paused (Lifecycle Event)
Feb 02 15:38:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd375382865bbb2a142e01122c9ab6a48a8ef35eb35f8d4c25ccba0035492f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:38:55 compute-0 podman[258304]: 2026-02-02 15:38:55.168361552 +0000 UTC m=+0.171725090 container init 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:38:55 compute-0 podman[258304]: 2026-02-02 15:38:55.17409292 +0000 UTC m=+0.177456438 container start 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.177 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.182 239549 DEBUG nova.objects.instance [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'flavor' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.185 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046735.0944595, 0cd0267f-d963-4475-aa31-ae2d3864ad80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.186 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] VM Resumed (Lifecycle Event)
Feb 02 15:38:55 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [NOTICE]   (258325) : New worker (258327) forked
Feb 02 15:38:55 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [NOTICE]   (258325) : Loading success.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.328 239549 DEBUG oslo_concurrency.lockutils [None req-c45f7149-6464-44b1-8eb9-41b4f8fb9957 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.336 239549 INFO nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Took 8.19 seconds to spawn the instance on the hypervisor.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.338 239549 DEBUG nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.354 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.358 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.402 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.424 239549 INFO nova.compute.manager [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Took 9.49 seconds to build instance.
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.462 239549 DEBUG oslo_concurrency.lockutils [None req-ad98e215-feb6-4220-8e01-33df9726b34d 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:38:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Feb 02 15:38:55 compute-0 ceph-mon[75334]: osdmap e304: 3 total, 3 up, 3 in
Feb 02 15:38:55 compute-0 ceph-mon[75334]: pgmap v1261: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.901 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.902 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.902 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.902 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.902 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.904 239549 INFO nova.compute.manager [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Terminating instance
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.904 239549 DEBUG nova.compute.manager [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:38:55 compute-0 kernel: tap5cd195ef-88 (unregistering): left promiscuous mode
Feb 02 15:38:55 compute-0 NetworkManager[49171]: <info>  [1770046735.9520] device (tap5cd195ef-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:38:55 compute-0 ovn_controller[144995]: 2026-02-02T15:38:55Z|00129|binding|INFO|Releasing lport 5cd195ef-887e-43b8-a695-421365d8d1ca from this chassis (sb_readonly=0)
Feb 02 15:38:55 compute-0 ovn_controller[144995]: 2026-02-02T15:38:55Z|00130|binding|INFO|Setting lport 5cd195ef-887e-43b8-a695-421365d8d1ca down in Southbound
Feb 02 15:38:55 compute-0 ovn_controller[144995]: 2026-02-02T15:38:55Z|00131|binding|INFO|Removing iface tap5cd195ef-88 ovn-installed in OVS
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.956 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:55 compute-0 nova_compute[239545]: 2026-02-02 15:38:55.974 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 02 15:38:56 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 14.082s CPU time.
Feb 02 15:38:56 compute-0 systemd-machined[207609]: Machine qemu-11-instance-0000000b terminated.
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.021 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:13:b8 10.100.0.4'], port_security=['fa:16:3e:b8:13:b8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0478993b-8261-4780-971f-04d18afc9603', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab4d9435497e4a81a051bfaeef7c7de5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '068c39b8-0553-41d7-9e5a-98b8ea9dd66e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b279c3b-4584-48ef-ad70-f37da209c47b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=5cd195ef-887e-43b8-a695-421365d8d1ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.022 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 5cd195ef-887e-43b8-a695-421365d8d1ca in datapath c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec unbound from our chassis
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.024 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.025 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cc637173-ef01-4972-9b3c-1706500ad99a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.025 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec namespace which is not needed anymore
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.139 239549 INFO nova.virt.libvirt.driver [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] Instance destroyed successfully.
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.140 239549 DEBUG nova.objects.instance [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lazy-loading 'resources' on Instance uuid 0478993b-8261-4780-971f-04d18afc9603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [NOTICE]   (257782) : haproxy version is 2.8.14-c23fe91
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [NOTICE]   (257782) : path to executable is /usr/sbin/haproxy
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [WARNING]  (257782) : Exiting Master process...
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [WARNING]  (257782) : Exiting Master process...
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [ALERT]    (257782) : Current worker (257785) exited with code 143 (Terminated)
Feb 02 15:38:56 compute-0 neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec[257761]: [WARNING]  (257782) : All workers exited. Exiting... (0)
Feb 02 15:38:56 compute-0 systemd[1]: libpod-de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c.scope: Deactivated successfully.
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.171 239549 DEBUG nova.virt.libvirt.vif [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-341732747',display_name='tempest-VolumesExtendAttachedTest-instance-341732747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-341732747',id=11,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEi/ALaNSxgnlRX9FWWX8/c1Ia8XVxdvnoVBEwpV4DB09JZxKVCw9PFfREBYGQ87IQepjlJFnyjBPA3f1kTTLzyU9D/7EuGc5PAv2tfhhQ31//kTu2bw0CgFDNnlISartA==',key_name='tempest-keypair-1969230109',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:38:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ab4d9435497e4a81a051bfaeef7c7de5',ramdisk_id='',reservation_id='r-0f8mn8e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1890784903',owner_user_name='tempest-VolumesExtendAttachedTest-1890784903-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:38:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='630312472f584d3aa673cad217006b1c',uuid=0478993b-8261-4780-971f-04d18afc9603,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:38:56 compute-0 podman[258358]: 2026-02-02 15:38:56.171345087 +0000 UTC m=+0.058875200 container died de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.171 239549 DEBUG nova.network.os_vif_util [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converting VIF {"id": "5cd195ef-887e-43b8-a695-421365d8d1ca", "address": "fa:16:3e:b8:13:b8", "network": {"id": "c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-375855718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab4d9435497e4a81a051bfaeef7c7de5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cd195ef-88", "ovs_interfaceid": "5cd195ef-887e-43b8-a695-421365d8d1ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.172 239549 DEBUG nova.network.os_vif_util [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.172 239549 DEBUG os_vif [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.175 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.175 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cd195ef-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.177 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.179 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.181 239549 INFO os_vif [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:13:b8,bridge_name='br-int',has_traffic_filtering=True,id=5cd195ef-887e-43b8-a695-421365d8d1ca,network=Network(c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cd195ef-88')
Feb 02 15:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c-userdata-shm.mount: Deactivated successfully.
Feb 02 15:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de46c5f792f3e787224c067cae9ba85571f313491e3f0583495dcf8a7f30048-merged.mount: Deactivated successfully.
Feb 02 15:38:56 compute-0 podman[258358]: 2026-02-02 15:38:56.234275073 +0000 UTC m=+0.121805186 container cleanup de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:38:56 compute-0 systemd[1]: libpod-conmon-de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c.scope: Deactivated successfully.
Feb 02 15:38:56 compute-0 podman[258413]: 2026-02-02 15:38:56.303497601 +0000 UTC m=+0.049362010 container remove de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.309 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2358a8c1-2fb2-48b1-a00b-6ffea00f9a1f]: (4, ('Mon Feb  2 03:38:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec (de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c)\nde7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c\nMon Feb  2 03:38:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec (de7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c)\nde7d23f6206ab0ac1983fad35e2088ba168551724c6ff1dad27f6e65be3b5a1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.311 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f90ab1a-91b2-483d-a383-ae0f16acdced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.312 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a41c5c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.314 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 kernel: tapc4a41c5c-30: left promiscuous mode
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.317 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.321 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bed019b4-db1f-4484-8cdf-16d3b76b951e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.325 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.346 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[429b0486-216f-46e5-b486-aa7bebcfd236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.347 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3164af45-e11b-4012-8637-b1dde939e1cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.358 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d91b234-17a8-4cf1-a4b8-4f6529c2702c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417854, 'reachable_time': 27627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258428, 'error': None, 'target': 'ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4a41c5c\x2d3f3d\x2d4ced\x2d9d34\x2ddc6db367b0ec.mount: Deactivated successfully.
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.361 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4a41c5c-3f3d-4ced-9d34-dc6db367b0ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:38:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:56.361 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[89b9232f-8ef5-41bb-907a-7c60d11e54ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.488 239549 INFO nova.virt.libvirt.driver [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Deleting instance files /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603_del
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.489 239549 INFO nova.virt.libvirt.driver [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Deletion of /var/lib/nova/instances/0478993b-8261-4780-971f-04d18afc9603_del complete
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.534 239549 INFO nova.compute.manager [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.534 239549 DEBUG oslo.service.loopingcall [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.535 239549 DEBUG nova.compute.manager [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.535 239549 DEBUG nova.network.neutron [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.648 239549 DEBUG nova.compute.manager [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-unplugged-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.648 239549 DEBUG oslo_concurrency.lockutils [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.648 239549 DEBUG oslo_concurrency.lockutils [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.648 239549 DEBUG oslo_concurrency.lockutils [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.648 239549 DEBUG nova.compute.manager [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] No waiting events found dispatching network-vif-unplugged-5cd195ef-887e-43b8-a695-421365d8d1ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.649 239549 DEBUG nova.compute.manager [req-0b7b6550-5b8f-4f23-a7f2-c101952be9d0 req-9f38bae5-7b3f-4396-b898-656c4e0140a5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-unplugged-5cd195ef-887e-43b8-a695-421365d8d1ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:38:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2603056697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2603056697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2603056697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2603056697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.877 239549 DEBUG nova.compute.manager [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.878 239549 DEBUG oslo_concurrency.lockutils [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.878 239549 DEBUG oslo_concurrency.lockutils [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.878 239549 DEBUG oslo_concurrency.lockutils [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.879 239549 DEBUG nova.compute.manager [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] No waiting events found dispatching network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:56 compute-0 nova_compute[239545]: 2026-02-02 15:38:56.879 239549 WARNING nova.compute.manager [req-b963c6c5-355c-430e-bf76-e47be8b28bc7 req-500349f9-5672-4ae9-a647-53e27c8a079f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received unexpected event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 for instance with vm_state active and task_state None.
Feb 02 15:38:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.6 MiB/s wr, 97 op/s
Feb 02 15:38:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:38:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Feb 02 15:38:57 compute-0 ceph-mon[75334]: pgmap v1262: 305 pgs: 305 active+clean; 213 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.6 MiB/s wr, 97 op/s
Feb 02 15:38:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Feb 02 15:38:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Feb 02 15:38:57 compute-0 nova_compute[239545]: 2026-02-02 15:38:57.950 239549 DEBUG nova.network.neutron [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:38:57 compute-0 nova_compute[239545]: 2026-02-02 15:38:57.982 239549 INFO nova.compute.manager [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] Took 1.45 seconds to deallocate network for instance.
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.054 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.055 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.122 239549 DEBUG oslo_concurrency.processutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:38:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:38:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/81824445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.631 239549 DEBUG oslo_concurrency.processutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.636 239549 DEBUG nova.compute.provider_tree [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.834 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:38:58 compute-0 ceph-mon[75334]: osdmap e305: 3 total, 3 up, 3 in
Feb 02 15:38:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/81824445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.983 239549 DEBUG nova.scheduler.client.report [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.991 239549 DEBUG nova.compute.manager [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.991 239549 DEBUG oslo_concurrency.lockutils [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0478993b-8261-4780-971f-04d18afc9603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.991 239549 DEBUG oslo_concurrency.lockutils [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.991 239549 DEBUG oslo_concurrency.lockutils [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.992 239549 DEBUG nova.compute.manager [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] No waiting events found dispatching network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:38:58 compute-0 nova_compute[239545]: 2026-02-02 15:38:58.992 239549 WARNING nova.compute.manager [req-d64847c2-f17c-4969-9ce3-52f2b4dc55cd req-4680495c-63f0-4cc8-929b-a65720ad4d4f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received unexpected event network-vif-plugged-5cd195ef-887e-43b8-a695-421365d8d1ca for instance with vm_state deleted and task_state None.
Feb 02 15:38:59 compute-0 nova_compute[239545]: 2026-02-02 15:38:59.015 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:59 compute-0 nova_compute[239545]: 2026-02-02 15:38:59.069 239549 INFO nova.scheduler.client.report [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Deleted allocations for instance 0478993b-8261-4780-971f-04d18afc9603
Feb 02 15:38:59 compute-0 nova_compute[239545]: 2026-02-02 15:38:59.083 239549 DEBUG nova.compute.manager [req-199519cc-4caf-4a8b-94f6-54998198e4f9 req-9e7d0bd4-aab8-4799-a45d-d22f346a3bb1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0478993b-8261-4780-971f-04d18afc9603] Received event network-vif-deleted-5cd195ef-887e-43b8-a695-421365d8d1ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:38:59 compute-0 nova_compute[239545]: 2026-02-02 15:38:59.156 239549 DEBUG oslo_concurrency.lockutils [None req-1e53a086-f64f-4727-91e6-429eb03f3fc1 630312472f584d3aa673cad217006b1c ab4d9435497e4a81a051bfaeef7c7de5 - - default default] Lock "0478993b-8261-4780-971f-04d18afc9603" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:59.250 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:38:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:59.251 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:38:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:38:59.251 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:38:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:38:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961162050' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:38:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961162050' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 193 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 24 KiB/s wr, 141 op/s
Feb 02 15:38:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1961162050' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:38:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1961162050' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:38:59 compute-0 ceph-mon[75334]: pgmap v1264: 305 pgs: 305 active+clean; 193 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 24 KiB/s wr, 141 op/s
Feb 02 15:39:00 compute-0 nova_compute[239545]: 2026-02-02 15:39:00.557 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.179 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:39:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 231 op/s
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.694 239549 DEBUG nova.compute.manager [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.695 239549 DEBUG nova.compute.manager [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing instance network info cache due to event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.696 239549 DEBUG oslo_concurrency.lockutils [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.696 239549 DEBUG oslo_concurrency.lockutils [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.697 239549 DEBUG nova.network.neutron [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:39:01 compute-0 nova_compute[239545]: 2026-02-02 15:39:01.795 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:39:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/845150340' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:02 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/845150340' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:02 compute-0 ceph-mon[75334]: pgmap v1265: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 231 op/s
Feb 02 15:39:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/845150340' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/845150340' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Feb 02 15:39:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Feb 02 15:39:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Feb 02 15:39:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.0 KiB/s wr, 223 op/s
Feb 02 15:39:03 compute-0 ceph-mon[75334]: osdmap e306: 3 total, 3 up, 3 in
Feb 02 15:39:03 compute-0 ceph-mon[75334]: pgmap v1267: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.0 KiB/s wr, 223 op/s
Feb 02 15:39:03 compute-0 nova_compute[239545]: 2026-02-02 15:39:03.836 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.240 239549 DEBUG nova.network.neutron [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updated VIF entry in instance network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.242 239549 DEBUG nova.network.neutron [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.273 239549 DEBUG oslo_concurrency.lockutils [req-e01ad447-3375-46b9-b26f-e8d9007adc39 req-1cc3ee7d-fdc2-4ee0-ae38-6307fe00a516 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.275 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.276 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:39:04 compute-0 nova_compute[239545]: 2026-02-02 15:39:04.277 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:04 compute-0 podman[258454]: 2026-02-02 15:39:04.359529262 +0000 UTC m=+0.087719986 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:39:04 compute-0 podman[258453]: 2026-02-02 15:39:04.361330745 +0000 UTC m=+0.087136141 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:39:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.9 KiB/s wr, 230 op/s
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.051 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.104 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.104 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.105 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.105 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.181 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:06 compute-0 nova_compute[239545]: 2026-02-02 15:39:06.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:06 compute-0 ceph-mon[75334]: pgmap v1268: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.9 KiB/s wr, 230 op/s
Feb 02 15:39:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117035527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.197 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.198 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.198 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.198 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.199 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:07 compute-0 ovn_controller[144995]: 2026-02-02T15:39:07Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:83:95 10.100.0.10
Feb 02 15:39:07 compute-0 ovn_controller[144995]: 2026-02-02T15:39:07Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:83:95 10.100.0.10
Feb 02 15:39:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 KiB/s wr, 189 op/s
Feb 02 15:39:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Feb 02 15:39:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Feb 02 15:39:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1117035527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Feb 02 15:39:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:39:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437462027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.737 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.835 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.835 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:39:07 compute-0 ovn_controller[144995]: 2026-02-02T15:39:07Z|00132|binding|INFO|Releasing lport 240bc225-e61e-427a-8aef-43d7550fa498 from this chassis (sb_readonly=0)
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.969 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.994 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.994 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4269MB free_disk=59.96731290593743GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.995 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:07 compute-0 nova_compute[239545]: 2026-02-02 15:39:07.995 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.212 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.212 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.212 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.359 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Feb 02 15:39:08 compute-0 ceph-mon[75334]: pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 KiB/s wr, 189 op/s
Feb 02 15:39:08 compute-0 ceph-mon[75334]: osdmap e307: 3 total, 3 up, 3 in
Feb 02 15:39:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/437462027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Feb 02 15:39:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.839 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:39:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173929060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.928 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:08 compute-0 nova_compute[239545]: 2026-02-02 15:39:08.935 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.070 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.138 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.138 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:39:09 compute-0 nova_compute[239545]: 2026-02-02 15:39:09.575 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:39:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 139 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 699 KiB/s wr, 44 op/s
Feb 02 15:39:09 compute-0 ceph-mon[75334]: osdmap e308: 3 total, 3 up, 3 in
Feb 02 15:39:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1173929060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:09 compute-0 ceph-mon[75334]: pgmap v1272: 305 pgs: 305 active+clean; 139 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 699 KiB/s wr, 44 op/s
Feb 02 15:39:10 compute-0 sudo[258538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:39:10 compute-0 sudo[258538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:10 compute-0 sudo[258538]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:10 compute-0 sudo[258563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:39:10 compute-0 sudo[258563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:10 compute-0 sudo[258563]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:10 compute-0 nova_compute[239545]: 2026-02-02 15:39:10.574 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:39:10 compute-0 sudo[258621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:39:10 compute-0 sudo[258621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:10 compute-0 sudo[258621]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:39:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:39:10 compute-0 sudo[258646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:39:10 compute-0 sudo[258646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Feb 02 15:39:10 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.031646456 +0000 UTC m=+0.045710832 container create e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:39:11 compute-0 systemd[1]: Started libpod-conmon-e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2.scope.
Feb 02 15:39:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.010917087 +0000 UTC m=+0.024981493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.113761376 +0000 UTC m=+0.127825762 container init e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.120539989 +0000 UTC m=+0.134604345 container start e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.1239053 +0000 UTC m=+0.137969656 container attach e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:11 compute-0 serene_hofstadter[258698]: 167 167
Feb 02 15:39:11 compute-0 systemd[1]: libpod-e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2.scope: Deactivated successfully.
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.125515669 +0000 UTC m=+0.139580035 container died e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:39:11 compute-0 nova_compute[239545]: 2026-02-02 15:39:11.138 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046736.1368563, 0478993b-8261-4780-971f-04d18afc9603 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:11 compute-0 nova_compute[239545]: 2026-02-02 15:39:11.138 239549 INFO nova.compute.manager [-] [instance: 0478993b-8261-4780-971f-04d18afc9603] VM Stopped (Lifecycle Event)
Feb 02 15:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-841e3b410bedfd89b55beb596dfc13b36fff20eb728ec502c0302438f160218e-merged.mount: Deactivated successfully.
Feb 02 15:39:11 compute-0 nova_compute[239545]: 2026-02-02 15:39:11.158 239549 DEBUG nova.compute.manager [None req-ba86b0a7-4746-4438-8f85-7c7ce1c44ef1 - - - - - -] [instance: 0478993b-8261-4780-971f-04d18afc9603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:11 compute-0 podman[258682]: 2026-02-02 15:39:11.161531737 +0000 UTC m=+0.175596093 container remove e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hofstadter, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:11 compute-0 systemd[1]: libpod-conmon-e029120ef5c08a0b43844d8aada0c6fc251b4a35e400abdcc5c18eede41623d2.scope: Deactivated successfully.
Feb 02 15:39:11 compute-0 nova_compute[239545]: 2026-02-02 15:39:11.184 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.309759 +0000 UTC m=+0.040906887 container create a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:39:11 compute-0 systemd[1]: Started libpod-conmon-a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91.scope.
Feb 02 15:39:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.293553219 +0000 UTC m=+0.024701126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.393428377 +0000 UTC m=+0.124576294 container init a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.400489037 +0000 UTC m=+0.131636934 container start a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.403887788 +0000 UTC m=+0.135035865 container attach a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:39:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 677 KiB/s rd, 4.3 MiB/s wr, 166 op/s
Feb 02 15:39:11 compute-0 ceph-mon[75334]: osdmap e309: 3 total, 3 up, 3 in
Feb 02 15:39:11 compute-0 ceph-mon[75334]: pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 677 KiB/s rd, 4.3 MiB/s wr, 166 op/s
Feb 02 15:39:11 compute-0 agitated_wright[258739]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:39:11 compute-0 agitated_wright[258739]: --> All data devices are unavailable
Feb 02 15:39:11 compute-0 systemd[1]: libpod-a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91.scope: Deactivated successfully.
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.829519977 +0000 UTC m=+0.560667884 container died a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c327823deed24205dfb01db70b2283f6752d386ad20d53694ac84e1dccb8551-merged.mount: Deactivated successfully.
Feb 02 15:39:11 compute-0 podman[258722]: 2026-02-02 15:39:11.870661399 +0000 UTC m=+0.601809286 container remove a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:39:11 compute-0 systemd[1]: libpod-conmon-a0d6bdcd5cc2e140e24e57340080aceae928d75314ca1915a4e6351f4cd11c91.scope: Deactivated successfully.
Feb 02 15:39:11 compute-0 sudo[258646]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:11 compute-0 sudo[258773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:39:11 compute-0 sudo[258773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:11 compute-0 sudo[258773]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:12 compute-0 sudo[258798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:39:12 compute-0 sudo[258798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.28683175 +0000 UTC m=+0.032414852 container create 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:39:12 compute-0 systemd[1]: Started libpod-conmon-6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff.scope.
Feb 02 15:39:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.350028443 +0000 UTC m=+0.095611555 container init 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.356361195 +0000 UTC m=+0.101944257 container start 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:39:12 compute-0 adoring_goldberg[258851]: 167 167
Feb 02 15:39:12 compute-0 systemd[1]: libpod-6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff.scope: Deactivated successfully.
Feb 02 15:39:12 compute-0 conmon[258851]: conmon 6912307201efe2efb279 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff.scope/container/memory.events
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.361759945 +0000 UTC m=+0.107343067 container attach 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.362129724 +0000 UTC m=+0.107712796 container died 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.271020448 +0000 UTC m=+0.016603540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fb44db6abf34a22a402a016f71d86cbbe87142b54a2c3b234a5dfe0d69ce5ad-merged.mount: Deactivated successfully.
Feb 02 15:39:12 compute-0 podman[258835]: 2026-02-02 15:39:12.396027202 +0000 UTC m=+0.141610274 container remove 6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:39:12 compute-0 systemd[1]: libpod-conmon-6912307201efe2efb279b490daeb2878e4a1f3b45f8bc3d984fb187b0c3ea4ff.scope: Deactivated successfully.
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.550548736 +0000 UTC m=+0.039723848 container create c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:39:12 compute-0 systemd[1]: Started libpod-conmon-c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e.scope.
Feb 02 15:39:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2ee8f675b9246f27a53443b3ea621dbca56162fc68a79484b14bc99113deaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2ee8f675b9246f27a53443b3ea621dbca56162fc68a79484b14bc99113deaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2ee8f675b9246f27a53443b3ea621dbca56162fc68a79484b14bc99113deaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2ee8f675b9246f27a53443b3ea621dbca56162fc68a79484b14bc99113deaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.531450095 +0000 UTC m=+0.020625207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.631418805 +0000 UTC m=+0.120593917 container init c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.647098713 +0000 UTC m=+0.136273805 container start c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.65111796 +0000 UTC m=+0.140293052 container attach c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:39:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Feb 02 15:39:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Feb 02 15:39:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Feb 02 15:39:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]: {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     "0": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "devices": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "/dev/loop3"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             ],
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_name": "ceph_lv0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_size": "21470642176",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "name": "ceph_lv0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "tags": {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_name": "ceph",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.crush_device_class": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.encrypted": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.objectstore": "bluestore",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_id": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.vdo": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.with_tpm": "0"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             },
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "vg_name": "ceph_vg0"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         }
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     ],
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     "1": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "devices": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "/dev/loop4"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             ],
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_name": "ceph_lv1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_size": "21470642176",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "name": "ceph_lv1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "tags": {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_name": "ceph",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.crush_device_class": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.encrypted": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.objectstore": "bluestore",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_id": "1",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.vdo": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.with_tpm": "0"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             },
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "vg_name": "ceph_vg1"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         }
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     ],
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     "2": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "devices": [
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "/dev/loop5"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             ],
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_name": "ceph_lv2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_size": "21470642176",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "name": "ceph_lv2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "tags": {
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.cluster_name": "ceph",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.crush_device_class": "",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.encrypted": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.objectstore": "bluestore",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osd_id": "2",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.vdo": "0",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:                 "ceph.with_tpm": "0"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             },
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "type": "block",
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:             "vg_name": "ceph_vg2"
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:         }
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]:     ]
Feb 02 15:39:12 compute-0 vibrant_wiles[258892]: }
Feb 02 15:39:12 compute-0 systemd[1]: libpod-c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e.scope: Deactivated successfully.
Feb 02 15:39:12 compute-0 podman[258875]: 2026-02-02 15:39:12.970547389 +0000 UTC m=+0.459722491 container died c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a2ee8f675b9246f27a53443b3ea621dbca56162fc68a79484b14bc99113deaa-merged.mount: Deactivated successfully.
Feb 02 15:39:13 compute-0 podman[258875]: 2026-02-02 15:39:13.016776373 +0000 UTC m=+0.505951475 container remove c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:13 compute-0 systemd[1]: libpod-conmon-c9777339a300f615b7bdb2e14648855153d4605a0317e76c2aa268c34e3e865e.scope: Deactivated successfully.
Feb 02 15:39:13 compute-0 sudo[258798]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1539175394' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1539175394' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:13 compute-0 sudo[258913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:39:13 compute-0 sudo[258913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:13 compute-0 sudo[258913]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:13 compute-0 sudo[258938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:39:13 compute-0 sudo[258938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.463150902 +0000 UTC m=+0.042976167 container create caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:39:13 compute-0 systemd[1]: Started libpod-conmon-caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec.scope.
Feb 02 15:39:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.441586382 +0000 UTC m=+0.021411737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.543505998 +0000 UTC m=+0.123331323 container init caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.550961139 +0000 UTC m=+0.130786424 container start caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:39:13 compute-0 dreamy_euler[258992]: 167 167
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.55514512 +0000 UTC m=+0.134970395 container attach caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:39:13 compute-0 systemd[1]: libpod-caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec.scope: Deactivated successfully.
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.556645666 +0000 UTC m=+0.136470941 container died caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6230ea1ff38c94a2b6389bb1b5e7b9bea8571860af31bd788b208bd276c1b2c0-merged.mount: Deactivated successfully.
Feb 02 15:39:13 compute-0 podman[258976]: 2026-02-02 15:39:13.591488805 +0000 UTC m=+0.171314070 container remove caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:39:13 compute-0 systemd[1]: libpod-conmon-caf4ab6787c51b8da9b21509eb9030dbc83907db4de8dc4cf683de251b1795ec.scope: Deactivated successfully.
Feb 02 15:39:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 4.3 MiB/s wr, 171 op/s
Feb 02 15:39:13 compute-0 podman[259017]: 2026-02-02 15:39:13.738030387 +0000 UTC m=+0.056601965 container create d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:39:13 compute-0 ceph-mon[75334]: osdmap e310: 3 total, 3 up, 3 in
Feb 02 15:39:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1539175394' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1539175394' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:13 compute-0 ceph-mon[75334]: pgmap v1276: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 4.3 MiB/s wr, 171 op/s
Feb 02 15:39:13 compute-0 systemd[1]: Started libpod-conmon-d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8.scope.
Feb 02 15:39:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223bbd89f54e4259724ef06e2bc932c9f1a1654ef502858f7043860920bb154e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223bbd89f54e4259724ef06e2bc932c9f1a1654ef502858f7043860920bb154e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223bbd89f54e4259724ef06e2bc932c9f1a1654ef502858f7043860920bb154e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223bbd89f54e4259724ef06e2bc932c9f1a1654ef502858f7043860920bb154e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:39:13 compute-0 podman[259017]: 2026-02-02 15:39:13.806217601 +0000 UTC m=+0.124789189 container init d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:39:13 compute-0 podman[259017]: 2026-02-02 15:39:13.717246556 +0000 UTC m=+0.035818184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:39:13 compute-0 podman[259017]: 2026-02-02 15:39:13.811504178 +0000 UTC m=+0.130075746 container start d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:39:13 compute-0 podman[259017]: 2026-02-02 15:39:13.814053809 +0000 UTC m=+0.132625397 container attach d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:39:13 compute-0 nova_compute[239545]: 2026-02-02 15:39:13.841 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:14 compute-0 lvm[259110]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:39:14 compute-0 lvm[259113]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:39:14 compute-0 lvm[259110]: VG ceph_vg0 finished
Feb 02 15:39:14 compute-0 lvm[259113]: VG ceph_vg1 finished
Feb 02 15:39:14 compute-0 lvm[259115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:39:14 compute-0 lvm[259115]: VG ceph_vg2 finished
Feb 02 15:39:14 compute-0 lucid_mestorf[259034]: {}
Feb 02 15:39:14 compute-0 podman[259017]: 2026-02-02 15:39:14.568441702 +0000 UTC m=+0.887013270 container died d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:39:14 compute-0 systemd[1]: libpod-d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8.scope: Deactivated successfully.
Feb 02 15:39:14 compute-0 systemd[1]: libpod-d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8.scope: Consumed 1.046s CPU time.
Feb 02 15:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-223bbd89f54e4259724ef06e2bc932c9f1a1654ef502858f7043860920bb154e-merged.mount: Deactivated successfully.
Feb 02 15:39:14 compute-0 podman[259017]: 2026-02-02 15:39:14.616234734 +0000 UTC m=+0.934806312 container remove d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:39:14 compute-0 systemd[1]: libpod-conmon-d4ac9393097e60c3a9ac223b081ea17cbc3687d1dc9b4bdcdd32a9ae8e7d49f8.scope: Deactivated successfully.
Feb 02 15:39:14 compute-0 sudo[258938]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:39:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:39:14 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:14 compute-0 sudo[259128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:39:14 compute-0 sudo[259128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:39:14 compute-0 sudo[259128]: pam_unix(sudo:session): session closed for user root
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 556 KiB/s rd, 3.1 MiB/s wr, 182 op/s
Feb 02 15:39:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:15 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:39:16 compute-0 nova_compute[239545]: 2026-02-02 15:39:16.187 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:16 compute-0 ovn_controller[144995]: 2026-02-02T15:39:16Z|00133|binding|INFO|Releasing lport 240bc225-e61e-427a-8aef-43d7550fa498 from this chassis (sb_readonly=0)
Feb 02 15:39:16 compute-0 nova_compute[239545]: 2026-02-02 15:39:16.645 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:16 compute-0 ceph-mon[75334]: pgmap v1277: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 556 KiB/s rd, 3.1 MiB/s wr, 182 op/s
Feb 02 15:39:17 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:17.116 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:39:17 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:17.116 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:39:17 compute-0 nova_compute[239545]: 2026-02-02 15:39:17.118 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Feb 02 15:39:17 compute-0 ceph-mon[75334]: pgmap v1278: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Feb 02 15:39:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Feb 02 15:39:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Feb 02 15:39:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Feb 02 15:39:17 compute-0 nova_compute[239545]: 2026-02-02 15:39:17.840 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:17 compute-0 nova_compute[239545]: 2026-02-02 15:39:17.840 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:17 compute-0 nova_compute[239545]: 2026-02-02 15:39:17.857 239549 DEBUG nova.objects.instance [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:17 compute-0 nova_compute[239545]: 2026-02-02 15:39:17.899 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.291 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.292 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.292 239549 INFO nova.compute.manager [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Attaching volume 9a960307-48ca-4464-b486-206e25ea0afb to /dev/vdb
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.550 239549 DEBUG os_brick.utils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.552 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.561 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.561 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7059e8ff-952d-4d7d-84c0-2902dd33e736]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.563 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.586 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.587 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[543e803b-b162-4ca3-a3ac-b0b01c06692c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.588 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.594 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.595 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f21513-66ee-440b-a18e-1ccd43258545]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.596 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[82b31210-88e7-4720-b842-a4530828b6f3]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.596 239549 DEBUG oslo_concurrency.processutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.610 239549 DEBUG oslo_concurrency.processutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.613 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.614 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.614 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.615 239549 DEBUG os_brick.utils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.616 239549 DEBUG nova.virt.block_device [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating existing volume attachment record: 9b8f7810-48f8-41e7-b57b-5762a13b43b0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:39:18 compute-0 ceph-mon[75334]: osdmap e311: 3 total, 3 up, 3 in
Feb 02 15:39:18 compute-0 nova_compute[239545]: 2026-02-02 15:39:18.892 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547694788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.471 239549 DEBUG nova.objects.instance [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.504 239549 DEBUG nova.virt.libvirt.driver [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Attempting to attach volume 9a960307-48ca-4464-b486-206e25ea0afb with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.507 239549 DEBUG nova.virt.libvirt.guest [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:39:19 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9a960307-48ca-4464-b486-206e25ea0afb">
Feb 02 15:39:19 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   </source>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:39:19 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:39:19 compute-0 nova_compute[239545]:   <serial>9a960307-48ca-4464-b486-206e25ea0afb</serial>
Feb 02 15:39:19 compute-0 nova_compute[239545]: </disk>
Feb 02 15:39:19 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.593 239549 DEBUG nova.virt.libvirt.driver [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.594 239549 DEBUG nova.virt.libvirt.driver [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.594 239549 DEBUG nova.virt.libvirt.driver [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.594 239549 DEBUG nova.virt.libvirt.driver [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No VIF found with MAC fa:16:3e:71:83:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:39:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 20 KiB/s wr, 48 op/s
Feb 02 15:39:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/547694788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:19 compute-0 ceph-mon[75334]: pgmap v1280: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 20 KiB/s wr, 48 op/s
Feb 02 15:39:19 compute-0 nova_compute[239545]: 2026-02-02 15:39:19.827 239549 DEBUG oslo_concurrency.lockutils [None req-5c707bb1-90ed-4beb-9a85-e67d33a0cd95 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:21 compute-0 nova_compute[239545]: 2026-02-02 15:39:21.191 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 21 KiB/s wr, 48 op/s
Feb 02 15:39:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248412191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Feb 02 15:39:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Feb 02 15:39:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Feb 02 15:39:22 compute-0 ceph-mon[75334]: pgmap v1281: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 21 KiB/s wr, 48 op/s
Feb 02 15:39:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2248412191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:22 compute-0 nova_compute[239545]: 2026-02-02 15:39:22.950 239549 DEBUG oslo_concurrency.lockutils [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:22 compute-0 nova_compute[239545]: 2026-02-02 15:39:22.951 239549 DEBUG oslo_concurrency.lockutils [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:22 compute-0 nova_compute[239545]: 2026-02-02 15:39:22.972 239549 INFO nova.compute.manager [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Detaching volume 9a960307-48ca-4464-b486-206e25ea0afb
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.109 239549 INFO nova.virt.block_device [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Attempting to driver detach volume 9a960307-48ca-4464-b486-206e25ea0afb from mountpoint /dev/vdb
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.119 239549 DEBUG nova.virt.libvirt.driver [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Attempting to detach device vdb from instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.120 239549 DEBUG nova.virt.libvirt.guest [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9a960307-48ca-4464-b486-206e25ea0afb">
Feb 02 15:39:23 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   </source>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <serial>9a960307-48ca-4464-b486-206e25ea0afb</serial>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]: </disk>
Feb 02 15:39:23 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.129 239549 INFO nova.virt.libvirt.driver [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully detached device vdb from instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 from the persistent domain config.
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.129 239549 DEBUG nova.virt.libvirt.driver [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.130 239549 DEBUG nova.virt.libvirt.guest [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-9a960307-48ca-4464-b486-206e25ea0afb">
Feb 02 15:39:23 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   </source>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <serial>9a960307-48ca-4464-b486-206e25ea0afb</serial>
Feb 02 15:39:23 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:39:23 compute-0 nova_compute[239545]: </disk>
Feb 02 15:39:23 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.199 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046763.1986556, 0cd0267f-d963-4475-aa31-ae2d3864ad80 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.200 239549 DEBUG nova.virt.libvirt.driver [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.204 239549 INFO nova.virt.libvirt.driver [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully detached device vdb from instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 from the live domain config.
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.394 239549 DEBUG nova.objects.instance [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.433 239549 DEBUG oslo_concurrency.lockutils [None req-81f069aa-3a4d-4b3a-910d-829675f95750 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 4.6 KiB/s wr, 15 op/s
Feb 02 15:39:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Feb 02 15:39:23 compute-0 ceph-mon[75334]: osdmap e312: 3 total, 3 up, 3 in
Feb 02 15:39:23 compute-0 ceph-mon[75334]: pgmap v1283: 305 pgs: 305 active+clean; 167 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 4.6 KiB/s wr, 15 op/s
Feb 02 15:39:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Feb 02 15:39:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Feb 02 15:39:23 compute-0 nova_compute[239545]: 2026-02-02 15:39:23.918 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Feb 02 15:39:24 compute-0 ceph-mon[75334]: osdmap e313: 3 total, 3 up, 3 in
Feb 02 15:39:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Feb 02 15:39:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Feb 02 15:39:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/886018346' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/886018346' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:25.118 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974816746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 391 KiB/s wr, 67 op/s
Feb 02 15:39:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Feb 02 15:39:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Feb 02 15:39:25 compute-0 ceph-mon[75334]: osdmap e314: 3 total, 3 up, 3 in
Feb 02 15:39:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/886018346' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/886018346' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/974816746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:25 compute-0 ceph-mon[75334]: pgmap v1286: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 391 KiB/s wr, 67 op/s
Feb 02 15:39:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Feb 02 15:39:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271927323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271927323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.125 239549 DEBUG nova.compute.manager [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.177 239549 INFO nova.compute.manager [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] instance snapshotting
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.244 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.383 239549 INFO nova.virt.libvirt.driver [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Beginning live snapshot process
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.516 239549 DEBUG nova.virt.libvirt.imagebackend [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No parent info for 271bf15b-9e9a-428a-a098-dcc68b158a7a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.668 239549 DEBUG nova.storage.rbd_utils [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] creating snapshot(dab1891baeef46038adb4619ab508660) on rbd image(0cd0267f-d963-4475-aa31-ae2d3864ad80_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 02 15:39:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Feb 02 15:39:26 compute-0 ceph-mon[75334]: osdmap e315: 3 total, 3 up, 3 in
Feb 02 15:39:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/271927323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/271927323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Feb 02 15:39:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Feb 02 15:39:26 compute-0 nova_compute[239545]: 2026-02-02 15:39:26.833 239549 DEBUG nova.storage.rbd_utils [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] cloning vms/0cd0267f-d963-4475-aa31-ae2d3864ad80_disk@dab1891baeef46038adb4619ab508660 to images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 02 15:39:27 compute-0 nova_compute[239545]: 2026-02-02 15:39:27.005 239549 DEBUG nova.storage.rbd_utils [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] flattening images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 02 15:39:27 compute-0 nova_compute[239545]: 2026-02-02 15:39:27.350 239549 DEBUG nova.storage.rbd_utils [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] removing snapshot(dab1891baeef46038adb4619ab508660) on rbd image(0cd0267f-d963-4475-aa31-ae2d3864ad80_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 02 15:39:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 577 KiB/s wr, 69 op/s
Feb 02 15:39:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Feb 02 15:39:27 compute-0 ceph-mon[75334]: osdmap e316: 3 total, 3 up, 3 in
Feb 02 15:39:27 compute-0 ceph-mon[75334]: pgmap v1289: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 577 KiB/s wr, 69 op/s
Feb 02 15:39:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Feb 02 15:39:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Feb 02 15:39:27 compute-0 nova_compute[239545]: 2026-02-02 15:39:27.827 239549 DEBUG nova.storage.rbd_utils [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] creating snapshot(snap) on rbd image(8e3e083e-b65c-4749-8ca6-c10a6b6905ac) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 02 15:39:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Feb 02 15:39:28 compute-0 ceph-mon[75334]: osdmap e317: 3 total, 3 up, 3 in
Feb 02 15:39:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Feb 02 15:39:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Feb 02 15:39:28 compute-0 nova_compute[239545]: 2026-02-02 15:39:28.920 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 208 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.3 MiB/s wr, 147 op/s
Feb 02 15:39:29 compute-0 ceph-mon[75334]: osdmap e318: 3 total, 3 up, 3 in
Feb 02 15:39:29 compute-0 ceph-mon[75334]: pgmap v1292: 305 pgs: 305 active+clean; 208 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.3 MiB/s wr, 147 op/s
Feb 02 15:39:30 compute-0 nova_compute[239545]: 2026-02-02 15:39:30.117 239549 INFO nova.virt.libvirt.driver [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Snapshot image upload complete
Feb 02 15:39:30 compute-0 nova_compute[239545]: 2026-02-02 15:39:30.118 239549 INFO nova.compute.manager [None req-42b6e40c-28f6-4756-93fb-f74a92339572 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Took 3.94 seconds to snapshot the instance on the hypervisor.
Feb 02 15:39:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1381654238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Feb 02 15:39:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Feb 02 15:39:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Feb 02 15:39:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1381654238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:31 compute-0 nova_compute[239545]: 2026-02-02 15:39:31.247 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 9.7 MiB/s wr, 356 op/s
Feb 02 15:39:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Feb 02 15:39:31 compute-0 ceph-mon[75334]: osdmap e319: 3 total, 3 up, 3 in
Feb 02 15:39:31 compute-0 ceph-mon[75334]: pgmap v1294: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 9.7 MiB/s wr, 356 op/s
Feb 02 15:39:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Feb 02 15:39:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Feb 02 15:39:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1055719898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1055719898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Feb 02 15:39:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Feb 02 15:39:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Feb 02 15:39:32 compute-0 ceph-mon[75334]: osdmap e320: 3 total, 3 up, 3 in
Feb 02 15:39:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1055719898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1055719898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:32 compute-0 ceph-mon[75334]: osdmap e321: 3 total, 3 up, 3 in
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.321 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.322 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.342 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:39:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2831144111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2831144111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.420 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.421 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.430 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.430 239549 INFO nova.compute.claims [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.547 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.5 MiB/s wr, 325 op/s
Feb 02 15:39:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Feb 02 15:39:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2831144111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2831144111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:33 compute-0 ceph-mon[75334]: pgmap v1297: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.5 MiB/s wr, 325 op/s
Feb 02 15:39:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Feb 02 15:39:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Feb 02 15:39:33 compute-0 nova_compute[239545]: 2026-02-02 15:39:33.925 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:39:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437386628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.110 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.118 239549 DEBUG nova.compute.provider_tree [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.142 239549 DEBUG nova.scheduler.client.report [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.179 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.180 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.243 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.243 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.271 239549 INFO nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.291 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:39:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3746006835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3746006835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.402 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.404 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.404 239549 INFO nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Creating image(s)
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.430 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.458 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.486 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.490 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "1f9344f1bc3c7fd19a31c9fc51879ee83bf18901" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.491 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "1f9344f1bc3c7fd19a31c9fc51879ee83bf18901" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.496 239549 DEBUG nova.policy [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '52fc74263c9d4d478b0b870727c4fa0c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:39:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650185295' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650185295' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.714 239549 DEBUG nova.virt.libvirt.imagebackend [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Image locations are: [{'url': 'rbd://e43470b2-6632-573a-87d3-0f5428ec59e9/images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e43470b2-6632-573a-87d3-0f5428ec59e9/images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.774 239549 DEBUG nova.virt.libvirt.imagebackend [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Selected location: {'url': 'rbd://e43470b2-6632-573a-87d3-0f5428ec59e9/images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.775 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] cloning images/8e3e083e-b65c-4749-8ca6-c10a6b6905ac@snap to None/b7efb964-7e90-423b-b648-41772085a2be_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.860 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "1f9344f1bc3c7fd19a31c9fc51879ee83bf18901" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:34 compute-0 ceph-mon[75334]: osdmap e322: 3 total, 3 up, 3 in
Feb 02 15:39:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/437386628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3746006835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3746006835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1650185295' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1650185295' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.978 239549 DEBUG nova.objects.instance [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'migration_context' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.993 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.994 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Ensure instance console log exists: /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.994 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.995 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:34 compute-0 nova_compute[239545]: 2026-02-02 15:39:34.995 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:35 compute-0 nova_compute[239545]: 2026-02-02 15:39:35.203 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Successfully created port: 79ac76f3-882f-40a6-ab76-3286e5b6fc7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:39:35 compute-0 podman[259523]: 2026-02-02 15:39:35.371363682 +0000 UTC m=+0.103346531 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 02 15:39:35 compute-0 podman[259522]: 2026-02-02 15:39:35.381030047 +0000 UTC m=+0.113405555 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:39:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 14 KiB/s wr, 331 op/s
Feb 02 15:39:35 compute-0 ceph-mon[75334]: pgmap v1299: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 14 KiB/s wr, 331 op/s
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.006 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Successfully updated port: 79ac76f3-882f-40a6-ab76-3286e5b6fc7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.021 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.022 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquired lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.022 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.089 239549 DEBUG nova.compute.manager [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.090 239549 DEBUG nova.compute.manager [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing instance network info cache due to event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.090 239549 DEBUG oslo_concurrency.lockutils [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.250 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:36 compute-0 nova_compute[239545]: 2026-02-02 15:39:36.482 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:39:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3948258102' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3948258102' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 11 KiB/s wr, 264 op/s
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.699 239549 DEBUG nova.network.neutron [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating instance_info_cache with network_info: [{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.728 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Releasing lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.728 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Instance network_info: |[{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.729 239549 DEBUG oslo_concurrency.lockutils [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.729 239549 DEBUG nova.network.neutron [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.731 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Start _get_guest_xml network_info=[{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T15:39:25Z,direct_url=<?>,disk_format='raw',id=8e3e083e-b65c-4749-8ca6-c10a6b6905ac,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-712370755',owner='46fcff5180ad4462a78fc4ba0bf7c266',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T15:39:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '8e3e083e-b65c-4749-8ca6-c10a6b6905ac'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.735 239549 WARNING nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.740 239549 DEBUG nova.virt.libvirt.host [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.740 239549 DEBUG nova.virt.libvirt.host [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.743 239549 DEBUG nova.virt.libvirt.host [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.744 239549 DEBUG nova.virt.libvirt.host [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.744 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.744 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T15:39:25Z,direct_url=<?>,disk_format='raw',id=8e3e083e-b65c-4749-8ca6-c10a6b6905ac,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-712370755',owner='46fcff5180ad4462a78fc4ba0bf7c266',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T15:39:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.745 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.745 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.745 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.745 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.745 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.746 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.746 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.746 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.746 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.746 239549 DEBUG nova.virt.hardware [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:39:37 compute-0 nova_compute[239545]: 2026-02-02 15:39:37.748 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Feb 02 15:39:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Feb 02 15:39:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Feb 02 15:39:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311908022' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:38 compute-0 nova_compute[239545]: 2026-02-02 15:39:38.369 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:38 compute-0 nova_compute[239545]: 2026-02-02 15:39:38.395 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:38 compute-0 nova_compute[239545]: 2026-02-02 15:39:38.401 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:38 compute-0 ceph-mon[75334]: pgmap v1300: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 11 KiB/s wr, 264 op/s
Feb 02 15:39:38 compute-0 ceph-mon[75334]: osdmap e323: 3 total, 3 up, 3 in
Feb 02 15:39:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1311908022' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Feb 02 15:39:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Feb 02 15:39:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Feb 02 15:39:38 compute-0 nova_compute[239545]: 2026-02-02 15:39:38.928 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070533903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:38.999 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.002 239549 DEBUG nova.virt.libvirt.vif [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-222622463',display_name='tempest-TestStampPattern-server-222622463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-222622463',id=13,image_ref='8e3e083e-b65c-4749-8ca6-c10a6b6905ac',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-hpi94sts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0cd0267f-d963-4475-aa31-ae2d3864ad80',image_min_disk='1',image_min_ram='0',image_owner_id='46fcff5180ad4462a78fc4ba0bf7c266',image_owner_project_name='tempest-TestStampPattern-2129228693',image_owner_user_name='tempest-TestStampPattern-2129228693-project-member',image_user_id='52fc74263c9d4d478b0b870727c4fa0c',network_allocated='True',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:39:34Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=b7efb964-7e90-423b-b648-41772085a2be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.003 239549 DEBUG nova.network.os_vif_util [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.004 239549 DEBUG nova.network.os_vif_util [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.006 239549 DEBUG nova.objects.instance [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.023 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <uuid>b7efb964-7e90-423b-b648-41772085a2be</uuid>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <name>instance-0000000d</name>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:name>tempest-TestStampPattern-server-222622463</nova:name>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:39:37</nova:creationTime>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:user uuid="52fc74263c9d4d478b0b870727c4fa0c">tempest-TestStampPattern-2129228693-project-member</nova:user>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:project uuid="46fcff5180ad4462a78fc4ba0bf7c266">tempest-TestStampPattern-2129228693</nova:project>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="8e3e083e-b65c-4749-8ca6-c10a6b6905ac"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <nova:port uuid="79ac76f3-882f-40a6-ab76-3286e5b6fc7e">
Feb 02 15:39:39 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <system>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="serial">b7efb964-7e90-423b-b648-41772085a2be</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="uuid">b7efb964-7e90-423b-b648-41772085a2be</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </system>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <os>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </os>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <features>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </features>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/b7efb964-7e90-423b-b648-41772085a2be_disk">
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </source>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/b7efb964-7e90-423b-b648-41772085a2be_disk.config">
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </source>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:39:39 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:bf:fa:1d"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <target dev="tap79ac76f3-88"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/console.log" append="off"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <video>
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </video>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <input type="keyboard" bus="usb"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:39:39 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:39:39 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:39:39 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:39:39 compute-0 nova_compute[239545]: </domain>
Feb 02 15:39:39 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.023 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Preparing to wait for external event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.024 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.024 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.024 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.025 239549 DEBUG nova.virt.libvirt.vif [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-222622463',display_name='tempest-TestStampPattern-server-222622463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-222622463',id=13,image_ref='8e3e083e-b65c-4749-8ca6-c10a6b6905ac',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-hpi94sts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0cd0267f-d963-4475-aa31-ae2d3864ad80',image_min_disk='1',image_min_ram='0',image_owner_id='46fcff5180ad4462a78fc4ba0bf7c266',image_owner_project_name='tempest-TestStampPattern-2129228693',image_owner_user_name='tempest-TestStampPattern-2129228693-project-member',image_user_id='52fc74263c9d4d478b0b870727c4fa0c',network_allocated='True',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:39:34Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=b7efb964-7e90-423b-b648-41772085
a2be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.025 239549 DEBUG nova.network.os_vif_util [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.025 239549 DEBUG nova.network.os_vif_util [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.026 239549 DEBUG os_vif [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.026 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.027 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.027 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.030 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.030 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79ac76f3-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.030 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79ac76f3-88, col_values=(('external_ids', {'iface-id': '79ac76f3-882f-40a6-ab76-3286e5b6fc7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:fa:1d', 'vm-uuid': 'b7efb964-7e90-423b-b648-41772085a2be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.032 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 NetworkManager[49171]: <info>  [1770046779.0335] manager: (tap79ac76f3-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.035 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.039 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.041 239549 INFO os_vif [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88')
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.078 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.078 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.079 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No VIF found with MAC fa:16:3e:bf:fa:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.079 239549 INFO nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Using config drive
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.097 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.460 239549 INFO nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Creating config drive at /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.464 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphz8eo59b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.583 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphz8eo59b" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.611 239549 DEBUG nova.storage.rbd_utils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] rbd image b7efb964-7e90-423b-b648-41772085a2be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.615 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config b7efb964-7e90-423b-b648-41772085a2be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 10 KiB/s wr, 265 op/s
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.641 239549 DEBUG nova.network.neutron [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updated VIF entry in instance network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.641 239549 DEBUG nova.network.neutron [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating instance_info_cache with network_info: [{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.658 239549 DEBUG oslo_concurrency.lockutils [req-1d81b793-9731-46ea-bc29-592aa96e4228 req-e5368f8b-4d42-4fc6-b7e5-c3b937bd2594 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.722 239549 DEBUG oslo_concurrency.processutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config b7efb964-7e90-423b-b648-41772085a2be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.722 239549 INFO nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Deleting local config drive /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be/disk.config because it was imported into RBD.
Feb 02 15:39:39 compute-0 kernel: tap79ac76f3-88: entered promiscuous mode
Feb 02 15:39:39 compute-0 NetworkManager[49171]: <info>  [1770046779.7501] manager: (tap79ac76f3-88): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.751 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 ovn_controller[144995]: 2026-02-02T15:39:39Z|00134|binding|INFO|Claiming lport 79ac76f3-882f-40a6-ab76-3286e5b6fc7e for this chassis.
Feb 02 15:39:39 compute-0 ovn_controller[144995]: 2026-02-02T15:39:39Z|00135|binding|INFO|79ac76f3-882f-40a6-ab76-3286e5b6fc7e: Claiming fa:16:3e:bf:fa:1d 10.100.0.5
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.759 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 ovn_controller[144995]: 2026-02-02T15:39:39Z|00136|binding|INFO|Setting lport 79ac76f3-882f-40a6-ab76-3286e5b6fc7e ovn-installed in OVS
Feb 02 15:39:39 compute-0 ovn_controller[144995]: 2026-02-02T15:39:39Z|00137|binding|INFO|Setting lport 79ac76f3-882f-40a6-ab76-3286e5b6fc7e up in Southbound
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.762 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:fa:1d 10.100.0.5'], port_security=['fa:16:3e:bf:fa:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b7efb964-7e90-423b-b648-41772085a2be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=79ac76f3-882f-40a6-ab76-3286e5b6fc7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.763 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.765 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 bound to our chassis
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.766 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f321435-d909-47d9-9978-c1a6e976cdf3
Feb 02 15:39:39 compute-0 systemd-machined[207609]: New machine qemu-13-instance-0000000d.
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.775 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[32238981-ee94-41ea-989c-e6b7461bd419]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.792 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e344b914-6efb-4177-abc1-b76f8afbc997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.794 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2494cd5e-8a2b-4c7c-8e9f-9ad5a1877bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 systemd-udevd[259705]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:39:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Feb 02 15:39:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Feb 02 15:39:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Feb 02 15:39:39 compute-0 NetworkManager[49171]: <info>  [1770046779.8105] device (tap79ac76f3-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:39:39 compute-0 NetworkManager[49171]: <info>  [1770046779.8110] device (tap79ac76f3-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:39:39 compute-0 ceph-mon[75334]: osdmap e324: 3 total, 3 up, 3 in
Feb 02 15:39:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2070533903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:39 compute-0 ceph-mon[75334]: pgmap v1303: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 10 KiB/s wr, 265 op/s
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.815 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[baa3e25f-111e-448e-9943-f452bb64f8e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.826 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6a8896-63bd-4e00-928f-f96976ac37a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f321435-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:45:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422047, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259710, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.836 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c73f4d-96e2-4174-b704-df6c52e9536c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f321435-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422055, 'tstamp': 422055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259715, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f321435-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422057, 'tstamp': 422057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259715, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.838 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f321435-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 nova_compute[239545]: 2026-02-02 15:39:39.840 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.843 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f321435-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.843 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.843 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f321435-d0, col_values=(('external_ids', {'iface-id': '240bc225-e61e-427a-8aef-43d7550fa498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:39:39 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:39.844 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.020 239549 DEBUG nova.compute.manager [req-d5c8afd0-d7aa-4bfa-88f6-4a731c0b1ed4 req-f4a322af-4572-42d0-9fc6-228a2f185248 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.021 239549 DEBUG oslo_concurrency.lockutils [req-d5c8afd0-d7aa-4bfa-88f6-4a731c0b1ed4 req-f4a322af-4572-42d0-9fc6-228a2f185248 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.021 239549 DEBUG oslo_concurrency.lockutils [req-d5c8afd0-d7aa-4bfa-88f6-4a731c0b1ed4 req-f4a322af-4572-42d0-9fc6-228a2f185248 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.021 239549 DEBUG oslo_concurrency.lockutils [req-d5c8afd0-d7aa-4bfa-88f6-4a731c0b1ed4 req-f4a322af-4572-42d0-9fc6-228a2f185248 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.022 239549 DEBUG nova.compute.manager [req-d5c8afd0-d7aa-4bfa-88f6-4a731c0b1ed4 req-f4a322af-4572-42d0-9fc6-228a2f185248 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Processing event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:39:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154886409' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154886409' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.282 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046780.2822309, b7efb964-7e90-423b-b648-41772085a2be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.283 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] VM Started (Lifecycle Event)
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.286 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.289 239549 DEBUG nova.virt.libvirt.driver [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.294 239549 INFO nova.virt.libvirt.driver [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] Instance spawned successfully.
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.295 239549 INFO nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Took 5.89 seconds to spawn the instance on the hypervisor.
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.295 239549 DEBUG nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.330 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.334 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.357 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.358 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046780.28234, b7efb964-7e90-423b-b648-41772085a2be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.358 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] VM Paused (Lifecycle Event)
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.377 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.380 239549 INFO nova.compute.manager [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Took 6.98 seconds to build instance.
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.386 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046780.2886782, b7efb964-7e90-423b-b648-41772085a2be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.386 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] VM Resumed (Lifecycle Event)
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.399 239549 DEBUG oslo_concurrency.lockutils [None req-865df206-3b5b-4037-8a9e-fcf5be8f73df 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.408 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:40 compute-0 nova_compute[239545]: 2026-02-02 15:39:40.413 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:39:40 compute-0 ceph-mon[75334]: osdmap e325: 3 total, 3 up, 3 in
Feb 02 15:39:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3154886409' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3154886409' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 29 KiB/s wr, 110 op/s
Feb 02 15:39:41 compute-0 ceph-mon[75334]: pgmap v1305: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 29 KiB/s wr, 110 op/s
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.146 239549 DEBUG nova.compute.manager [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.147 239549 DEBUG oslo_concurrency.lockutils [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.147 239549 DEBUG oslo_concurrency.lockutils [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.148 239549 DEBUG oslo_concurrency.lockutils [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.148 239549 DEBUG nova.compute.manager [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] No waiting events found dispatching network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:39:42 compute-0 nova_compute[239545]: 2026-02-02 15:39:42.149 239549 WARNING nova.compute.manager [req-a09c716a-ff8f-42ba-9afb-f7ef99fc7b7f req-8442f015-1a8d-48c3-a173-76e0c148bb12 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received unexpected event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e for instance with vm_state active and task_state None.
Feb 02 15:39:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Feb 02 15:39:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Feb 02 15:39:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Feb 02 15:39:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:39:42
Feb 02 15:39:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:39:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:39:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Feb 02 15:39:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:39:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2691264803' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 264 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.0 MiB/s wr, 273 op/s
Feb 02 15:39:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Feb 02 15:39:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Feb 02 15:39:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Feb 02 15:39:43 compute-0 ceph-mon[75334]: osdmap e326: 3 total, 3 up, 3 in
Feb 02 15:39:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2691264803' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:43 compute-0 ceph-mon[75334]: pgmap v1307: 305 pgs: 305 active+clean; 264 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.0 MiB/s wr, 273 op/s
Feb 02 15:39:43 compute-0 nova_compute[239545]: 2026-02-02 15:39:43.968 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:44 compute-0 nova_compute[239545]: 2026-02-02 15:39:44.032 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:39:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Feb 02 15:39:44 compute-0 ceph-mon[75334]: osdmap e327: 3 total, 3 up, 3 in
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.827466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784827488, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1308, "num_deletes": 256, "total_data_size": 1716212, "memory_usage": 1740336, "flush_reason": "Manual Compaction"}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb 02 15:39:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Feb 02 15:39:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784835223, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1694499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25344, "largest_seqno": 26651, "table_properties": {"data_size": 1688035, "index_size": 3603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14574, "raw_average_key_size": 20, "raw_value_size": 1674791, "raw_average_value_size": 2409, "num_data_blocks": 159, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046707, "oldest_key_time": 1770046707, "file_creation_time": 1770046784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 7804 microseconds, and 3613 cpu microseconds.
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.835265) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1694499 bytes OK
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.835283) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.837353) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.837368) EVENT_LOG_v1 {"time_micros": 1770046784837363, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.837384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1710052, prev total WAL file size 1710093, number of live WAL files 2.
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.837876) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1654KB)], [56(10MB)]
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784837932, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12727918, "oldest_snapshot_seqno": -1}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5609 keys, 10980693 bytes, temperature: kUnknown
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784881214, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10980693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10936326, "index_size": 29217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 139648, "raw_average_key_size": 24, "raw_value_size": 10828577, "raw_average_value_size": 1930, "num_data_blocks": 1198, "num_entries": 5609, "num_filter_entries": 5609, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.881464) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10980693 bytes
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.883405) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 293.5 rd, 253.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.5 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(14.0) write-amplify(6.5) OK, records in: 6135, records dropped: 526 output_compression: NoCompression
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.883429) EVENT_LOG_v1 {"time_micros": 1770046784883418, "job": 30, "event": "compaction_finished", "compaction_time_micros": 43363, "compaction_time_cpu_micros": 18527, "output_level": 6, "num_output_files": 1, "total_output_size": 10980693, "num_input_records": 6135, "num_output_records": 5609, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784883653, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046784884370, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.837755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.884418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.884424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.884426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.884428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:39:44.884430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:39:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:39:45 compute-0 nova_compute[239545]: 2026-02-02 15:39:45.241 239549 DEBUG nova.compute.manager [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:39:45 compute-0 nova_compute[239545]: 2026-02-02 15:39:45.241 239549 DEBUG nova.compute.manager [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing instance network info cache due to event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:39:45 compute-0 nova_compute[239545]: 2026-02-02 15:39:45.241 239549 DEBUG oslo_concurrency.lockutils [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:39:45 compute-0 nova_compute[239545]: 2026-02-02 15:39:45.242 239549 DEBUG oslo_concurrency.lockutils [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:39:45 compute-0 nova_compute[239545]: 2026-02-02 15:39:45.242 239549 DEBUG nova.network.neutron [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:39:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811728323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811728323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 295 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 311 op/s
Feb 02 15:39:45 compute-0 ceph-mon[75334]: osdmap e328: 3 total, 3 up, 3 in
Feb 02 15:39:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2811728323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2811728323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:45 compute-0 ceph-mon[75334]: pgmap v1310: 305 pgs: 305 active+clean; 295 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 311 op/s
Feb 02 15:39:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Feb 02 15:39:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Feb 02 15:39:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2334053630' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2334053630' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 nova_compute[239545]: 2026-02-02 15:39:47.128 239549 DEBUG nova.network.neutron [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updated VIF entry in instance network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:39:47 compute-0 nova_compute[239545]: 2026-02-02 15:39:47.129 239549 DEBUG nova.network.neutron [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating instance_info_cache with network_info: [{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:39:47 compute-0 nova_compute[239545]: 2026-02-02 15:39:47.148 239549 DEBUG oslo_concurrency.lockutils [req-e60d6062-e82c-4af8-8daf-601573da4d0e req-093082be-6d43-4f60-a656-e60f7ccc450e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1562641182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1562641182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 295 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.4 MiB/s wr, 329 op/s
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Feb 02 15:39:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Feb 02 15:39:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Feb 02 15:39:47 compute-0 ceph-mon[75334]: osdmap e329: 3 total, 3 up, 3 in
Feb 02 15:39:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2334053630' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2334053630' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1562641182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1562641182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:47 compute-0 ceph-mon[75334]: pgmap v1312: 305 pgs: 305 active+clean; 295 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.4 MiB/s wr, 329 op/s
Feb 02 15:39:47 compute-0 ceph-mon[75334]: osdmap e330: 3 total, 3 up, 3 in
Feb 02 15:39:48 compute-0 nova_compute[239545]: 2026-02-02 15:39:48.971 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:49 compute-0 nova_compute[239545]: 2026-02-02 15:39:49.033 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284624291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2284624291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 279 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 02 15:39:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Feb 02 15:39:50 compute-0 ceph-mon[75334]: pgmap v1314: 305 pgs: 305 active+clean; 279 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 02 15:39:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Feb 02 15:39:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Feb 02 15:39:51 compute-0 ceph-mon[75334]: osdmap e331: 3 total, 3 up, 3 in
Feb 02 15:39:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.8 KiB/s wr, 127 op/s
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Feb 02 15:39:52 compute-0 ceph-mon[75334]: pgmap v1316: 305 pgs: 305 active+clean; 248 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.8 KiB/s wr, 127 op/s
Feb 02 15:39:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Feb 02 15:39:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:39:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460013642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:39:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460013642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:53 compute-0 ovn_controller[144995]: 2026-02-02T15:39:53Z|00022|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.5
Feb 02 15:39:53 compute-0 ovn_controller[144995]: 2026-02-02T15:39:53Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bf:fa:1d 10.100.0.5
Feb 02 15:39:53 compute-0 ceph-mon[75334]: osdmap e332: 3 total, 3 up, 3 in
Feb 02 15:39:53 compute-0 ceph-mon[75334]: osdmap e333: 3 total, 3 up, 3 in
Feb 02 15:39:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2460013642' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:39:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2460013642' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:39:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 249 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 905 KiB/s rd, 9.4 KiB/s wr, 159 op/s
Feb 02 15:39:53 compute-0 nova_compute[239545]: 2026-02-02 15:39:53.972 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.035 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007649908899802936 of space, bias 1.0, pg target 0.22949726699408807 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003901868389090143 of space, bias 1.0, pg target 0.11705605167270429 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.075469671628077e-06 of space, bias 1.0, pg target 0.0006226409014884231 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014243073906525586 of space, bias 1.0, pg target 0.4272922171957676 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.34664501864098e-06 of space, bias 4.0, pg target 0.001615974022369176 quantized to 16 (current 16)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:39:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:39:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Feb 02 15:39:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Feb 02 15:39:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Feb 02 15:39:54 compute-0 ceph-mon[75334]: pgmap v1319: 305 pgs: 305 active+clean; 249 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 905 KiB/s rd, 9.4 KiB/s wr, 159 op/s
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.715 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.716 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.730 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.808 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.808 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.819 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.819 239549 INFO nova.compute.claims [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:39:54 compute-0 nova_compute[239545]: 2026-02-02 15:39:54.958 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:39:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072892841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.487 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.493 239549 DEBUG nova.compute.provider_tree [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.513 239549 DEBUG nova.scheduler.client.report [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.535 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.536 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.608 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.608 239549 DEBUG nova.network.neutron [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:39:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Feb 02 15:39:55 compute-0 ceph-mon[75334]: osdmap e334: 3 total, 3 up, 3 in
Feb 02 15:39:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3072892841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.632 239549 INFO nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:39:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 262 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 205 op/s
Feb 02 15:39:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Feb 02 15:39:55 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.650 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.741 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.742 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.743 239549 INFO nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Creating image(s)
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.764 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.787 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.809 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.813 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.874 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.874 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.875 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.875 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.896 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:55 compute-0 nova_compute[239545]: 2026-02-02 15:39:55.899 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.079 239549 DEBUG nova.network.neutron [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.080 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.114 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.170 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] resizing rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.235 239549 DEBUG nova.objects.instance [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lazy-loading 'migration_context' on Instance uuid bb16b75c-fa89-4b1b-ba03-90fee561a5b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.248 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.248 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Ensure instance console log exists: /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.249 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.249 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.249 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.251 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.255 239549 WARNING nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.258 239549 DEBUG nova.virt.libvirt.host [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.259 239549 DEBUG nova.virt.libvirt.host [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.262 239549 DEBUG nova.virt.libvirt.host [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.262 239549 DEBUG nova.virt.libvirt.host [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.263 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.263 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.263 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.263 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.263 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.264 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.264 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.264 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.264 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.265 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.265 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.265 239549 DEBUG nova.virt.hardware [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.268 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Feb 02 15:39:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Feb 02 15:39:56 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Feb 02 15:39:56 compute-0 ceph-mon[75334]: pgmap v1321: 305 pgs: 305 active+clean; 262 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 205 op/s
Feb 02 15:39:56 compute-0 ceph-mon[75334]: osdmap e335: 3 total, 3 up, 3 in
Feb 02 15:39:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279660245' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.811 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.834 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:56 compute-0 nova_compute[239545]: 2026-02-02 15:39:56.838 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:39:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695552690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.369 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.371 239549 DEBUG nova.objects.instance [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb16b75c-fa89-4b1b-ba03-90fee561a5b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.566 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <uuid>bb16b75c-fa89-4b1b-ba03-90fee561a5b9</uuid>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <name>instance-0000000e</name>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:name>tempest-VolumesNegativeTest-instance-55258021</nova:name>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:39:56</nova:creationTime>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:user uuid="40bd8b2776484c7d97231ded1bb56b58">tempest-VolumesNegativeTest-1649004952-project-member</nova:user>
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <nova:project uuid="ad61c6964c674c82aa121ac13ad9bb92">tempest-VolumesNegativeTest-1649004952</nova:project>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <nova:ports/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <system>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="serial">bb16b75c-fa89-4b1b-ba03-90fee561a5b9</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="uuid">bb16b75c-fa89-4b1b-ba03-90fee561a5b9</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </system>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <os>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </os>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <features>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </features>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk">
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </source>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config">
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </source>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:39:57 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/console.log" append="off"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <video>
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </video>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:39:57 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:39:57 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:39:57 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:39:57 compute-0 nova_compute[239545]: </domain>
Feb 02 15:39:57 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.623 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.624 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.624 239549 INFO nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Using config drive
Feb 02 15:39:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 262 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 215 op/s
Feb 02 15:39:57 compute-0 nova_compute[239545]: 2026-02-02 15:39:57.641 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:57 compute-0 ceph-mon[75334]: osdmap e336: 3 total, 3 up, 3 in
Feb 02 15:39:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1279660245' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/695552690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:39:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:39:57 compute-0 ovn_controller[144995]: 2026-02-02T15:39:57Z|00024|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.5
Feb 02 15:39:57 compute-0 ovn_controller[144995]: 2026-02-02T15:39:57Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bf:fa:1d 10.100.0.5
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.013 239549 INFO nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Creating config drive at /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.017 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00xgehja execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.140 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp00xgehja" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.160 239549 DEBUG nova.storage.rbd_utils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] rbd image bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.163 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:39:58 compute-0 ovn_controller[144995]: 2026-02-02T15:39:58Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:fa:1d 10.100.0.5
Feb 02 15:39:58 compute-0 ovn_controller[144995]: 2026-02-02T15:39:58Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:fa:1d 10.100.0.5
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.305 239549 DEBUG oslo_concurrency.processutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config bb16b75c-fa89-4b1b-ba03-90fee561a5b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.306 239549 INFO nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Deleting local config drive /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9/disk.config because it was imported into RBD.
Feb 02 15:39:58 compute-0 systemd-machined[207609]: New machine qemu-14-instance-0000000e.
Feb 02 15:39:58 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb 02 15:39:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Feb 02 15:39:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Feb 02 15:39:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Feb 02 15:39:58 compute-0 ceph-mon[75334]: pgmap v1324: 305 pgs: 305 active+clean; 262 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 215 op/s
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.848 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046798.8480296, bb16b75c-fa89-4b1b-ba03-90fee561a5b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.849 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] VM Resumed (Lifecycle Event)
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.851 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.852 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.855 239549 INFO nova.virt.libvirt.driver [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance spawned successfully.
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.855 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.877 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.883 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.886 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.886 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.887 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.887 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.887 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.888 239549 DEBUG nova.virt.libvirt.driver [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.918 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.919 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046798.8483512, bb16b75c-fa89-4b1b-ba03-90fee561a5b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.919 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] VM Started (Lifecycle Event)
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.947 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.952 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.974 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.978 239549 INFO nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Took 3.24 seconds to spawn the instance on the hypervisor.
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.978 239549 DEBUG nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:39:58 compute-0 nova_compute[239545]: 2026-02-02 15:39:58.979 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:39:59 compute-0 nova_compute[239545]: 2026-02-02 15:39:59.038 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:39:59 compute-0 nova_compute[239545]: 2026-02-02 15:39:59.042 239549 INFO nova.compute.manager [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Took 4.26 seconds to build instance.
Feb 02 15:39:59 compute-0 nova_compute[239545]: 2026-02-02 15:39:59.056 239549 DEBUG oslo_concurrency.lockutils [None req-f64dbc45-8eed-442f-8232-0cc31d0b6ea9 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:59.251 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:39:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:39:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:39:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:39:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 276 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 236 op/s
Feb 02 15:39:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Feb 02 15:39:59 compute-0 ceph-mon[75334]: osdmap e337: 3 total, 3 up, 3 in
Feb 02 15:39:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Feb 02 15:39:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Feb 02 15:40:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2069988991' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2069988991' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:00 compute-0 ceph-mon[75334]: pgmap v1326: 305 pgs: 305 active+clean; 276 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 236 op/s
Feb 02 15:40:00 compute-0 ceph-mon[75334]: osdmap e338: 3 total, 3 up, 3 in
Feb 02 15:40:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2069988991' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2069988991' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.498 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.499 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.500 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.500 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.500 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.502 239549 INFO nova.compute.manager [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Terminating instance
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.503 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "refresh_cache-bb16b75c-fa89-4b1b-ba03-90fee561a5b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.503 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquired lock "refresh_cache-bb16b75c-fa89-4b1b-ba03-90fee561a5b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.504 239549 DEBUG nova.network.neutron [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:40:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 312 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.7 MiB/s wr, 301 op/s
Feb 02 15:40:01 compute-0 nova_compute[239545]: 2026-02-02 15:40:01.694 239549 DEBUG nova.network.neutron [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:40:01 compute-0 ceph-mon[75334]: pgmap v1328: 305 pgs: 305 active+clean; 312 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.7 MiB/s wr, 301 op/s
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.123 239549 DEBUG nova.network.neutron [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.145 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Releasing lock "refresh_cache-bb16b75c-fa89-4b1b-ba03-90fee561a5b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.146 239549 DEBUG nova.compute.manager [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:40:02 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 02 15:40:02 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 3.823s CPU time.
Feb 02 15:40:02 compute-0 systemd-machined[207609]: Machine qemu-14-instance-0000000e terminated.
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.359 239549 INFO nova.virt.libvirt.driver [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance destroyed successfully.
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.359 239549 DEBUG nova.objects.instance [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lazy-loading 'resources' on Instance uuid bb16b75c-fa89-4b1b-ba03-90fee561a5b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.571 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.654 239549 INFO nova.virt.libvirt.driver [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Deleting instance files /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9_del
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.655 239549 INFO nova.virt.libvirt.driver [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Deletion of /var/lib/nova/instances/bb16b75c-fa89-4b1b-ba03-90fee561a5b9_del complete
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.713 239549 INFO nova.compute.manager [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.713 239549 DEBUG oslo.service.loopingcall [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.714 239549 DEBUG nova.compute.manager [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:40:02 compute-0 nova_compute[239545]: 2026-02-02 15:40:02.714 239549 DEBUG nova.network.neutron [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:40:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Feb 02 15:40:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Feb 02 15:40:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.022 239549 DEBUG nova.network.neutron [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.031 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.031 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.032 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.032 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.054 239549 DEBUG nova.network.neutron [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.216 239549 INFO nova.compute.manager [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Took 0.50 seconds to deallocate network for instance.
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.265 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.266 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.345 239549 DEBUG oslo_concurrency.processutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 302 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 357 op/s
Feb 02 15:40:03 compute-0 ceph-mon[75334]: osdmap e339: 3 total, 3 up, 3 in
Feb 02 15:40:03 compute-0 ceph-mon[75334]: pgmap v1330: 305 pgs: 305 active+clean; 302 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 357 op/s
Feb 02 15:40:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:40:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312444710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.874 239549 DEBUG oslo_concurrency.processutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.878 239549 DEBUG nova.compute.provider_tree [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.917 239549 DEBUG nova.scheduler.client.report [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.978 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:03 compute-0 nova_compute[239545]: 2026-02-02 15:40:03.982 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.021 239549 INFO nova.scheduler.client.report [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Deleted allocations for instance bb16b75c-fa89-4b1b-ba03-90fee561a5b9
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.041 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.163 239549 DEBUG oslo_concurrency.lockutils [None req-00cae1ec-6b96-41b9-9e19-4a2bcd341234 40bd8b2776484c7d97231ded1bb56b58 ad61c6964c674c82aa121ac13ad9bb92 - - default default] Lock "bb16b75c-fa89-4b1b-ba03-90fee561a5b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.560 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.576 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:40:04 compute-0 nova_compute[239545]: 2026-02-02 15:40:04.577 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2312444710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 320 op/s
Feb 02 15:40:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Feb 02 15:40:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Feb 02 15:40:05 compute-0 ceph-mon[75334]: pgmap v1331: 305 pgs: 305 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 320 op/s
Feb 02 15:40:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Feb 02 15:40:06 compute-0 podman[260171]: 2026-02-02 15:40:06.318438454 +0000 UTC m=+0.051138923 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:40:06 compute-0 podman[260170]: 2026-02-02 15:40:06.341420202 +0000 UTC m=+0.076603071 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.571 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.572 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.572 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.572 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:40:06 compute-0 nova_compute[239545]: 2026-02-02 15:40:06.572 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:06 compute-0 ceph-mon[75334]: osdmap e340: 3 total, 3 up, 3 in
Feb 02 15:40:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2852666334' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:40:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2886604318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.131 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.200 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.201 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.205 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.205 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.348 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.349 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3972MB free_disk=59.93626935686916GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.349 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.349 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.406 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0cd0267f-d963-4475-aa31-ae2d3864ad80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.406 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance b7efb964-7e90-423b-b648-41772085a2be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.407 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.407 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:40:07 compute-0 nova_compute[239545]: 2026-02-02 15:40:07.449 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 281 op/s
Feb 02 15:40:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Feb 02 15:40:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Feb 02 15:40:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Feb 02 15:40:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2852666334' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2886604318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:07 compute-0 ceph-mon[75334]: pgmap v1333: 305 pgs: 305 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 281 op/s
Feb 02 15:40:07 compute-0 ceph-mon[75334]: osdmap e341: 3 total, 3 up, 3 in
Feb 02 15:40:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:40:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738793458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.024 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.029 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.043 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.067 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.068 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Feb 02 15:40:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1738793458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Feb 02 15:40:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Feb 02 15:40:08 compute-0 nova_compute[239545]: 2026-02-02 15:40:08.979 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.043 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.068 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.068 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.068 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.068 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:09 compute-0 nova_compute[239545]: 2026-02-02 15:40:09.564 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 19 KiB/s wr, 81 op/s
Feb 02 15:40:09 compute-0 ceph-mon[75334]: osdmap e342: 3 total, 3 up, 3 in
Feb 02 15:40:09 compute-0 ceph-mon[75334]: pgmap v1336: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 266 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 19 KiB/s wr, 81 op/s
Feb 02 15:40:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Feb 02 15:40:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Feb 02 15:40:10 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Feb 02 15:40:11 compute-0 nova_compute[239545]: 2026-02-02 15:40:11.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:40:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 415 KiB/s wr, 131 op/s
Feb 02 15:40:11 compute-0 ceph-mon[75334]: osdmap e343: 3 total, 3 up, 3 in
Feb 02 15:40:11 compute-0 ceph-mon[75334]: pgmap v1338: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 415 KiB/s wr, 131 op/s
Feb 02 15:40:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Feb 02 15:40:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Feb 02 15:40:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Feb 02 15:40:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035918533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035918533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 471 KiB/s rd, 413 KiB/s wr, 135 op/s
Feb 02 15:40:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/625965736' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/625965736' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:13 compute-0 nova_compute[239545]: 2026-02-02 15:40:13.985 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:14 compute-0 ceph-mon[75334]: osdmap e344: 3 total, 3 up, 3 in
Feb 02 15:40:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2035918533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2035918533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:14 compute-0 ceph-mon[75334]: pgmap v1340: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 471 KiB/s rd, 413 KiB/s wr, 135 op/s
Feb 02 15:40:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/625965736' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/625965736' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:14 compute-0 nova_compute[239545]: 2026-02-02 15:40:14.045 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:14 compute-0 ovn_controller[144995]: 2026-02-02T15:40:14Z|00138|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:14 compute-0 sudo[260260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:40:14 compute-0 sudo[260260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:14 compute-0 sudo[260260]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:14 compute-0 sudo[260285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:40:14 compute-0 sudo[260285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:15 compute-0 podman[260354]: 2026-02-02 15:40:15.247226945 +0000 UTC m=+0.106262421 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:40:15 compute-0 podman[260354]: 2026-02-02 15:40:15.338078412 +0000 UTC m=+0.197113838 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:40:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 354 KiB/s wr, 217 op/s
Feb 02 15:40:15 compute-0 sudo[260285]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:40:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:40:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:15 compute-0 sudo[260544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:40:15 compute-0 sudo[260544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:16 compute-0 sudo[260544]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:16 compute-0 sudo[260569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:40:16 compute-0 sudo[260569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:16 compute-0 sudo[260569]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:40:16 compute-0 sudo[260625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:40:16 compute-0 sudo[260625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:16 compute-0 sudo[260625]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:16 compute-0 sudo[260650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:40:16 compute-0 sudo[260650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:16 compute-0 ceph-mon[75334]: pgmap v1341: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 354 KiB/s wr, 217 op/s
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:40:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:40:16 compute-0 podman[260687]: 2026-02-02 15:40:16.912495863 +0000 UTC m=+0.030266525 container create 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:40:16 compute-0 systemd[1]: Started libpod-conmon-850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76.scope.
Feb 02 15:40:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:16 compute-0 podman[260687]: 2026-02-02 15:40:16.89834637 +0000 UTC m=+0.016117072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:17 compute-0 podman[260687]: 2026-02-02 15:40:17.017393331 +0000 UTC m=+0.135163993 container init 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:40:17 compute-0 podman[260687]: 2026-02-02 15:40:17.023576591 +0000 UTC m=+0.141347283 container start 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:40:17 compute-0 hungry_merkle[260703]: 167 167
Feb 02 15:40:17 compute-0 systemd[1]: libpod-850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76.scope: Deactivated successfully.
Feb 02 15:40:17 compute-0 podman[260687]: 2026-02-02 15:40:17.063194503 +0000 UTC m=+0.180965165 container attach 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:40:17 compute-0 podman[260687]: 2026-02-02 15:40:17.06393022 +0000 UTC m=+0.181700902 container died 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-de23b50c96dcb07611b891a4503848c55c6ccc1bb87add9fbc0ad4d1ade78469-merged.mount: Deactivated successfully.
Feb 02 15:40:17 compute-0 podman[260687]: 2026-02-02 15:40:17.136030922 +0000 UTC m=+0.253801604 container remove 850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:40:17 compute-0 systemd[1]: libpod-conmon-850d3aaa8ba72ba291983745073a5afe77977acc9368d867ef6e032ca6458e76.scope: Deactivated successfully.
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.251672949 +0000 UTC m=+0.034986210 container create 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:40:17 compute-0 systemd[1]: Started libpod-conmon-07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e.scope.
Feb 02 15:40:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.331921189 +0000 UTC m=+0.115234500 container init 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.237167527 +0000 UTC m=+0.020480808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.339046552 +0000 UTC m=+0.122359813 container start 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.342845894 +0000 UTC m=+0.126159215 container attach 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:40:17 compute-0 nova_compute[239545]: 2026-02-02 15:40:17.358 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046802.3572524, bb16b75c-fa89-4b1b-ba03-90fee561a5b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:40:17 compute-0 nova_compute[239545]: 2026-02-02 15:40:17.359 239549 INFO nova.compute.manager [-] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] VM Stopped (Lifecycle Event)
Feb 02 15:40:17 compute-0 nova_compute[239545]: 2026-02-02 15:40:17.388 239549 DEBUG nova.compute.manager [None req-74b33cb7-e8a9-4ea5-8757-999ad2eb8e3d - - - - - -] [instance: bb16b75c-fa89-4b1b-ba03-90fee561a5b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:40:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 298 KiB/s wr, 182 op/s
Feb 02 15:40:17 compute-0 ceph-mon[75334]: pgmap v1342: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 298 KiB/s wr, 182 op/s
Feb 02 15:40:17 compute-0 serene_johnson[260746]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:40:17 compute-0 serene_johnson[260746]: --> All data devices are unavailable
Feb 02 15:40:17 compute-0 systemd[1]: libpod-07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e.scope: Deactivated successfully.
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.757304009 +0000 UTC m=+0.540617300 container died 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c840ccd38f42b5abd1760995398423af8fe5ab6a89614cccf543198232c934a-merged.mount: Deactivated successfully.
Feb 02 15:40:17 compute-0 podman[260730]: 2026-02-02 15:40:17.799152864 +0000 UTC m=+0.582466125 container remove 07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_johnson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:40:17 compute-0 systemd[1]: libpod-conmon-07d505a09e656443c74743aa338002cf4b27969f64b9c45778619bcf374c484e.scope: Deactivated successfully.
Feb 02 15:40:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Feb 02 15:40:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Feb 02 15:40:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Feb 02 15:40:17 compute-0 sudo[260650]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:17 compute-0 sudo[260778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:40:17 compute-0 sudo[260778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:17 compute-0 sudo[260778]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:17 compute-0 sudo[260803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:40:17 compute-0 sudo[260803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.16217689 +0000 UTC m=+0.039229904 container create 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.204 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.204 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:18 compute-0 systemd[1]: Started libpod-conmon-1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6.scope.
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.221 239549 DEBUG nova.objects.instance [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.14325378 +0000 UTC m=+0.020306854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.245270517 +0000 UTC m=+0.122323611 container init 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.251787926 +0000 UTC m=+0.128840940 container start 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.255444794 +0000 UTC m=+0.132497898 container attach 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:40:18 compute-0 recursing_mcclintock[260857]: 167 167
Feb 02 15:40:18 compute-0 systemd[1]: libpod-1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6.scope: Deactivated successfully.
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.256974561 +0000 UTC m=+0.134027695 container died 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.263 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-82f7e62d5d037ca9cc9ac6b750ef6a41c581a67de6622c35a0fd1f8b883bcb0c-merged.mount: Deactivated successfully.
Feb 02 15:40:18 compute-0 podman[260840]: 2026-02-02 15:40:18.299565396 +0000 UTC m=+0.176618410 container remove 1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:40:18 compute-0 systemd[1]: libpod-conmon-1ce6ef8a2c72230a8ee143192e4a45d24ec8a6b8f194c7d47e07d23943f1f5f6.scope: Deactivated successfully.
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.422489821 +0000 UTC m=+0.034273883 container create 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:40:18 compute-0 systemd[1]: Started libpod-conmon-3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585.scope.
Feb 02 15:40:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b2b37d592e9d0f1cf9b61db702cf438c2c12a61232eeef42bb21758b6ca9a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b2b37d592e9d0f1cf9b61db702cf438c2c12a61232eeef42bb21758b6ca9a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b2b37d592e9d0f1cf9b61db702cf438c2c12a61232eeef42bb21758b6ca9a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b2b37d592e9d0f1cf9b61db702cf438c2c12a61232eeef42bb21758b6ca9a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.407270612 +0000 UTC m=+0.019054684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.511545853 +0000 UTC m=+0.123329945 container init 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.518051522 +0000 UTC m=+0.129835584 container start 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.521270759 +0000 UTC m=+0.133054851 container attach 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.524 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.525 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.525 239549 INFO nova.compute.manager [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Attaching volume 1f1ccb3c-fc92-4d93-a163-5feb7a38610b to /dev/vdb
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.798 239549 DEBUG os_brick.utils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.800 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:18 compute-0 brave_blackwell[260899]: {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     "0": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "devices": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "/dev/loop3"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             ],
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_name": "ceph_lv0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_size": "21470642176",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "name": "ceph_lv0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "tags": {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_name": "ceph",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.crush_device_class": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.encrypted": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.objectstore": "bluestore",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_id": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.vdo": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.with_tpm": "0"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             },
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "vg_name": "ceph_vg0"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         }
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     ],
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     "1": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "devices": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "/dev/loop4"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             ],
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_name": "ceph_lv1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_size": "21470642176",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "name": "ceph_lv1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "tags": {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_name": "ceph",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.crush_device_class": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.encrypted": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.objectstore": "bluestore",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_id": "1",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.vdo": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.with_tpm": "0"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             },
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "vg_name": "ceph_vg1"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         }
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     ],
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     "2": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "devices": [
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "/dev/loop5"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             ],
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_name": "ceph_lv2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_size": "21470642176",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "name": "ceph_lv2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "tags": {
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.cluster_name": "ceph",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.crush_device_class": "",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.encrypted": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.objectstore": "bluestore",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osd_id": "2",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.vdo": "0",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:                 "ceph.with_tpm": "0"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             },
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "type": "block",
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:             "vg_name": "ceph_vg2"
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:         }
Feb 02 15:40:18 compute-0 brave_blackwell[260899]:     ]
Feb 02 15:40:18 compute-0 brave_blackwell[260899]: }
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.811 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.812 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[59717270-9a64-41c2-9e5d-561cc8d9a75b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.813 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:18 compute-0 ceph-mon[75334]: osdmap e345: 3 total, 3 up, 3 in
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.819 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.819 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[547166fe-766a-4d7b-925d-62c8bf47fff7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.820 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.826 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.827 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8911f8-e101-4691-8f10-dbb87cbbe8fc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.828 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[43b147c7-4a33-43cc-b98c-3e29c8bf599a]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.828 239549 DEBUG oslo_concurrency.processutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:18 compute-0 systemd[1]: libpod-3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585.scope: Deactivated successfully.
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.834517696 +0000 UTC m=+0.446301758 container died 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.844 239549 DEBUG oslo_concurrency.processutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.846 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.846 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.846 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.847 239549 DEBUG os_brick.utils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.847 239549 DEBUG nova.virt.block_device [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating existing volume attachment record: a2fe04ec-5b27-43f8-9f02-165bab256931 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b2b37d592e9d0f1cf9b61db702cf438c2c12a61232eeef42bb21758b6ca9a0-merged.mount: Deactivated successfully.
Feb 02 15:40:18 compute-0 podman[260882]: 2026-02-02 15:40:18.868345628 +0000 UTC m=+0.480129680 container remove 3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackwell, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:40:18 compute-0 systemd[1]: libpod-conmon-3ec257d8adc387fe4876633e2fb97245f18a394be9810e4840cb6bdfdb3cb585.scope: Deactivated successfully.
Feb 02 15:40:18 compute-0 sudo[260803]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:18 compute-0 sudo[260927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:40:18 compute-0 sudo[260927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:18 compute-0 sudo[260927]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:18 compute-0 nova_compute[239545]: 2026-02-02 15:40:18.987 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:19 compute-0 sudo[260952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:40:19 compute-0 sudo[260952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.047 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.302028259 +0000 UTC m=+0.040978396 container create 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 02 15:40:19 compute-0 systemd[1]: Started libpod-conmon-9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96.scope.
Feb 02 15:40:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.375419591 +0000 UTC m=+0.114369718 container init 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.284119394 +0000 UTC m=+0.023069571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.381681914 +0000 UTC m=+0.120632041 container start 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.384742648 +0000 UTC m=+0.123692805 container attach 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:40:19 compute-0 pensive_burnell[261005]: 167 167
Feb 02 15:40:19 compute-0 systemd[1]: libpod-9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96.scope: Deactivated successfully.
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.387472934 +0000 UTC m=+0.126423051 container died 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9954a44fdac5be2137c9ca52117c4a81d244a92ce6454619e80420dc32c24e-merged.mount: Deactivated successfully.
Feb 02 15:40:19 compute-0 podman[260989]: 2026-02-02 15:40:19.419551593 +0000 UTC m=+0.158501720 container remove 9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:40:19 compute-0 systemd[1]: libpod-conmon-9d58cb5afe98460022c77475308ce900deec7c7af0faa86b5c0fc393e00e3a96.scope: Deactivated successfully.
Feb 02 15:40:19 compute-0 podman[261028]: 2026-02-02 15:40:19.559321247 +0000 UTC m=+0.036945308 container create 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:40:19 compute-0 systemd[1]: Started libpod-conmon-9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c.scope.
Feb 02 15:40:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b1ebcc35cf7bc2c5e222e07297d36e6fb93461aef38f2064a5683e622de49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b1ebcc35cf7bc2c5e222e07297d36e6fb93461aef38f2064a5683e622de49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b1ebcc35cf7bc2c5e222e07297d36e6fb93461aef38f2064a5683e622de49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974b1ebcc35cf7bc2c5e222e07297d36e6fb93461aef38f2064a5683e622de49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:40:19 compute-0 podman[261028]: 2026-02-02 15:40:19.620353748 +0000 UTC m=+0.097977809 container init 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:40:19 compute-0 podman[261028]: 2026-02-02 15:40:19.624979071 +0000 UTC m=+0.102603132 container start 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:40:19 compute-0 podman[261028]: 2026-02-02 15:40:19.627860882 +0000 UTC m=+0.105484943 container attach 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:40:19 compute-0 podman[261028]: 2026-02-02 15:40:19.543501142 +0000 UTC m=+0.021125253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:40:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.9 KiB/s wr, 100 op/s
Feb 02 15:40:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962790301' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:19 compute-0 ceph-mon[75334]: pgmap v1344: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.9 KiB/s wr, 100 op/s
Feb 02 15:40:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3962790301' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.832 239549 DEBUG nova.objects.instance [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.852 239549 DEBUG nova.virt.libvirt.driver [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Attempting to attach volume 1f1ccb3c-fc92-4d93-a163-5feb7a38610b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.855 239549 DEBUG nova.virt.libvirt.guest [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:40:19 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-1f1ccb3c-fc92-4d93-a163-5feb7a38610b">
Feb 02 15:40:19 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   </source>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:40:19 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:40:19 compute-0 nova_compute[239545]:   <serial>1f1ccb3c-fc92-4d93-a163-5feb7a38610b</serial>
Feb 02 15:40:19 compute-0 nova_compute[239545]: </disk>
Feb 02 15:40:19 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.966 239549 DEBUG nova.virt.libvirt.driver [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.967 239549 DEBUG nova.virt.libvirt.driver [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.967 239549 DEBUG nova.virt.libvirt.driver [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:40:19 compute-0 nova_compute[239545]: 2026-02-02 15:40:19.967 239549 DEBUG nova.virt.libvirt.driver [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] No VIF found with MAC fa:16:3e:bf:fa:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:40:20 compute-0 nova_compute[239545]: 2026-02-02 15:40:20.148 239549 DEBUG oslo_concurrency.lockutils [None req-eaa31d26-23e0-4628-a6ba-8358a7685203 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:20 compute-0 lvm[261140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:40:20 compute-0 lvm[261140]: VG ceph_vg0 finished
Feb 02 15:40:20 compute-0 lvm[261143]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:40:20 compute-0 lvm[261143]: VG ceph_vg1 finished
Feb 02 15:40:20 compute-0 lvm[261145]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:40:20 compute-0 lvm[261145]: VG ceph_vg2 finished
Feb 02 15:40:20 compute-0 lvm[261146]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:40:20 compute-0 lvm[261146]: VG ceph_vg1 finished
Feb 02 15:40:20 compute-0 lvm[261148]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:40:20 compute-0 lvm[261148]: VG ceph_vg1 finished
Feb 02 15:40:20 compute-0 admiring_moser[261044]: {}
Feb 02 15:40:20 compute-0 systemd[1]: libpod-9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c.scope: Deactivated successfully.
Feb 02 15:40:20 compute-0 podman[261028]: 2026-02-02 15:40:20.338334184 +0000 UTC m=+0.815958235 container died 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-974b1ebcc35cf7bc2c5e222e07297d36e6fb93461aef38f2064a5683e622de49-merged.mount: Deactivated successfully.
Feb 02 15:40:20 compute-0 podman[261028]: 2026-02-02 15:40:20.373973649 +0000 UTC m=+0.851597720 container remove 9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:40:20 compute-0 systemd[1]: libpod-conmon-9189892c34bd49c3939727ea68a1692ad996133e3b40bf6e6ab97e98f8be725c.scope: Deactivated successfully.
Feb 02 15:40:20 compute-0 sudo[260952]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:40:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:40:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:20 compute-0 sudo[261161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:40:20 compute-0 sudo[261161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:40:20 compute-0 sudo[261161]: pam_unix(sudo:session): session closed for user root
Feb 02 15:40:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:40:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Feb 02 15:40:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Feb 02 15:40:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Feb 02 15:40:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/866339578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.035 239549 DEBUG oslo_concurrency.lockutils [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.035 239549 DEBUG oslo_concurrency.lockutils [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.054 239549 INFO nova.compute.manager [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Detaching volume 1f1ccb3c-fc92-4d93-a163-5feb7a38610b
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.171 239549 INFO nova.virt.block_device [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Attempting to driver detach volume 1f1ccb3c-fc92-4d93-a163-5feb7a38610b from mountpoint /dev/vdb
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.182 239549 DEBUG nova.virt.libvirt.driver [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Attempting to detach device vdb from instance b7efb964-7e90-423b-b648-41772085a2be from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.182 239549 DEBUG nova.virt.libvirt.guest [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-1f1ccb3c-fc92-4d93-a163-5feb7a38610b">
Feb 02 15:40:22 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   </source>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <serial>1f1ccb3c-fc92-4d93-a163-5feb7a38610b</serial>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]: </disk>
Feb 02 15:40:22 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.189 239549 INFO nova.virt.libvirt.driver [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully detached device vdb from instance b7efb964-7e90-423b-b648-41772085a2be from the persistent domain config.
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.190 239549 DEBUG nova.virt.libvirt.driver [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b7efb964-7e90-423b-b648-41772085a2be from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.190 239549 DEBUG nova.virt.libvirt.guest [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-1f1ccb3c-fc92-4d93-a163-5feb7a38610b">
Feb 02 15:40:22 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   </source>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <serial>1f1ccb3c-fc92-4d93-a163-5feb7a38610b</serial>
Feb 02 15:40:22 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:40:22 compute-0 nova_compute[239545]: </disk>
Feb 02 15:40:22 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.293 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770046822.2926407, b7efb964-7e90-423b-b648-41772085a2be => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.294 239549 DEBUG nova.virt.libvirt.driver [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b7efb964-7e90-423b-b648-41772085a2be _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.296 239549 INFO nova.virt.libvirt.driver [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully detached device vdb from instance b7efb964-7e90-423b-b648-41772085a2be from the live domain config.
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.435 239549 DEBUG nova.objects.instance [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'flavor' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Feb 02 15:40:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Feb 02 15:40:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Feb 02 15:40:22 compute-0 ceph-mon[75334]: osdmap e346: 3 total, 3 up, 3 in
Feb 02 15:40:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/866339578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:22 compute-0 ceph-mon[75334]: pgmap v1346: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Feb 02 15:40:22 compute-0 nova_compute[239545]: 2026-02-02 15:40:22.476 239549 DEBUG oslo_concurrency.lockutils [None req-7c7dc9a2-163b-4875-a15a-20dc33849201 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Feb 02 15:40:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Feb 02 15:40:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Feb 02 15:40:23 compute-0 ceph-mon[75334]: osdmap e347: 3 total, 3 up, 3 in
Feb 02 15:40:23 compute-0 ceph-mon[75334]: osdmap e348: 3 total, 3 up, 3 in
Feb 02 15:40:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/765234146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/765234146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 2.6 KiB/s wr, 24 op/s
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.922 239549 DEBUG nova.compute.manager [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.923 239549 DEBUG nova.compute.manager [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing instance network info cache due to event network-changed-79ac76f3-882f-40a6-ab76-3286e5b6fc7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.923 239549 DEBUG oslo_concurrency.lockutils [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.924 239549 DEBUG oslo_concurrency.lockutils [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.924 239549 DEBUG nova.network.neutron [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Refreshing network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.967 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.967 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.967 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.968 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.968 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.969 239549 INFO nova.compute.manager [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Terminating instance
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.971 239549 DEBUG nova.compute.manager [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:40:23 compute-0 nova_compute[239545]: 2026-02-02 15:40:23.990 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 kernel: tap79ac76f3-88 (unregistering): left promiscuous mode
Feb 02 15:40:24 compute-0 NetworkManager[49171]: <info>  [1770046824.0133] device (tap79ac76f3-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:40:24 compute-0 ovn_controller[144995]: 2026-02-02T15:40:24Z|00139|binding|INFO|Releasing lport 79ac76f3-882f-40a6-ab76-3286e5b6fc7e from this chassis (sb_readonly=0)
Feb 02 15:40:24 compute-0 ovn_controller[144995]: 2026-02-02T15:40:24Z|00140|binding|INFO|Setting lport 79ac76f3-882f-40a6-ab76-3286e5b6fc7e down in Southbound
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.019 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_controller[144995]: 2026-02-02T15:40:24Z|00141|binding|INFO|Removing iface tap79ac76f3-88 ovn-installed in OVS
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.021 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.026 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:fa:1d 10.100.0.5'], port_security=['fa:16:3e:bf:fa:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b7efb964-7e90-423b-b648-41772085a2be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=79ac76f3-882f-40a6-ab76-3286e5b6fc7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.028 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 unbound from our chassis
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.029 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f321435-d909-47d9-9978-c1a6e976cdf3
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.031 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.040 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e62cfb3-1619-4fa9-b8e7-490233d591ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.048 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.058 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[4d13d22e-6f76-4ad0-abba-5efeb2771c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.061 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[21914644-c6ab-4194-a553-8d21b64956bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 02 15:40:24 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.858s CPU time.
Feb 02 15:40:24 compute-0 systemd-machined[207609]: Machine qemu-13-instance-0000000d terminated.
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.077 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb48604-f031-4f89-a62b-256f416f86b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.086 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[47afcd4e-972c-4cca-a6b7-ed2f474cbee9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f321435-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:45:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422047, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261199, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.095 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c492ac53-e1ff-46c0-b54a-3bcd3854da94]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f321435-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422055, 'tstamp': 422055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261200, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f321435-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422057, 'tstamp': 422057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261200, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.096 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f321435-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.098 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.101 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.103 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f321435-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.104 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.104 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f321435-d0, col_values=(('external_ids', {'iface-id': '240bc225-e61e-427a-8aef-43d7550fa498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.104 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.200 239549 INFO nova.virt.libvirt.driver [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] Instance destroyed successfully.
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.201 239549 DEBUG nova.objects.instance [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'resources' on Instance uuid b7efb964-7e90-423b-b648-41772085a2be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.213 239549 DEBUG nova.virt.libvirt.vif [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-222622463',display_name='tempest-TestStampPattern-server-222622463',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-222622463',id=13,image_ref='8e3e083e-b65c-4749-8ca6-c10a6b6905ac',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:39:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-hpi94sts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0cd0267f-d963-4475-aa31-ae2d3864ad80',image_min_disk='1',image_min_ram='0',image_owner_id='46fcff5180ad4462a78fc4ba0bf7c266',image_owner_project_name='tempest-TestStampPattern-2129228693',image_owner_user_name='tempest-TestStampPattern-2129228693-project-member',image_user_id='52fc74263c9d4d478b0b870727c4fa0c',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:39:40Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=b7efb964-7e90-423b-b648-41772085a2be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state
='active') vif={"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.213 239549 DEBUG nova.network.os_vif_util [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.214 239549 DEBUG nova.network.os_vif_util [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.214 239549 DEBUG os_vif [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.216 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.216 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79ac76f3-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.217 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.218 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.219 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.221 239549 INFO os_vif [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:fa:1d,bridge_name='br-int',has_traffic_filtering=True,id=79ac76f3-882f-40a6-ab76-3286e5b6fc7e,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79ac76f3-88')
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.339 239549 DEBUG nova.compute.manager [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-unplugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.340 239549 DEBUG oslo_concurrency.lockutils [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.340 239549 DEBUG oslo_concurrency.lockutils [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.340 239549 DEBUG oslo_concurrency.lockutils [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.340 239549 DEBUG nova.compute.manager [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] No waiting events found dispatching network-vif-unplugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.341 239549 DEBUG nova.compute.manager [req-9f5b46a5-a4f1-46d5-af5c-17c7f6b3d833 req-1f04e597-1b68-43fa-8708-af857172797d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-unplugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.459 239549 INFO nova.virt.libvirt.driver [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Deleting instance files /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be_del
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.460 239549 INFO nova.virt.libvirt.driver [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Deletion of /var/lib/nova/instances/b7efb964-7e90-423b-b648-41772085a2be_del complete
Feb 02 15:40:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/765234146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/765234146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:24 compute-0 ceph-mon[75334]: pgmap v1349: 305 pgs: 305 active+clean; 269 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 2.6 KiB/s wr, 24 op/s
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.533 239549 INFO nova.compute.manager [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Took 0.56 seconds to destroy the instance on the hypervisor.
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.534 239549 DEBUG oslo.service.loopingcall [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.535 239549 DEBUG nova.compute.manager [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.535 239549 DEBUG nova.network.neutron [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.658 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:40:24 compute-0 nova_compute[239545]: 2026-02-02 15:40:24.658 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:24 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:24.659 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.381 239549 DEBUG nova.network.neutron [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updated VIF entry in instance network info cache for port 79ac76f3-882f-40a6-ab76-3286e5b6fc7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.381 239549 DEBUG nova.network.neutron [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating instance_info_cache with network_info: [{"id": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "address": "fa:16:3e:bf:fa:1d", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79ac76f3-88", "ovs_interfaceid": "79ac76f3-882f-40a6-ab76-3286e5b6fc7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.404 239549 DEBUG oslo_concurrency.lockutils [req-241d4e47-0097-42b4-bec1-9ec99769ebb9 req-997dce58-ba43-4dbe-9f96-5a92a2fc6320 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-b7efb964-7e90-423b-b648-41772085a2be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.450 239549 DEBUG nova.network.neutron [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.474 239549 INFO nova.compute.manager [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] Took 0.94 seconds to deallocate network for instance.
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.519 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.519 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:25 compute-0 nova_compute[239545]: 2026-02-02 15:40:25.594 239549 DEBUG oslo_concurrency.processutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 254 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 874 KiB/s rd, 355 KiB/s wr, 166 op/s
Feb 02 15:40:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:40:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506533138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.124 239549 DEBUG oslo_concurrency.processutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.128 239549 DEBUG nova.compute.provider_tree [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.145 239549 DEBUG nova.scheduler.client.report [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.169 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.190 239549 INFO nova.scheduler.client.report [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Deleted allocations for instance b7efb964-7e90-423b-b648-41772085a2be
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.244 239549 DEBUG oslo_concurrency.lockutils [None req-7035fb50-aa20-4e33-b9fd-6c15487e9189 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117647758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.557 239549 DEBUG nova.compute.manager [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.558 239549 DEBUG oslo_concurrency.lockutils [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b7efb964-7e90-423b-b648-41772085a2be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.558 239549 DEBUG oslo_concurrency.lockutils [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.558 239549 DEBUG oslo_concurrency.lockutils [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b7efb964-7e90-423b-b648-41772085a2be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.558 239549 DEBUG nova.compute.manager [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] No waiting events found dispatching network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.559 239549 WARNING nova.compute.manager [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received unexpected event network-vif-plugged-79ac76f3-882f-40a6-ab76-3286e5b6fc7e for instance with vm_state deleted and task_state None.
Feb 02 15:40:26 compute-0 nova_compute[239545]: 2026-02-02 15:40:26.559 239549 DEBUG nova.compute.manager [req-97af3ce6-fb26-471a-b297-030249328c14 req-3f4a37ed-754d-4bfc-8cc9-3a736f5a50e0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b7efb964-7e90-423b-b648-41772085a2be] Received event network-vif-deleted-79ac76f3-882f-40a6-ab76-3286e5b6fc7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Feb 02 15:40:26 compute-0 ceph-mon[75334]: pgmap v1350: 305 pgs: 305 active+clean; 254 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 874 KiB/s rd, 355 KiB/s wr, 166 op/s
Feb 02 15:40:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/506533138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2117647758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Feb 02 15:40:26 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026114537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026114537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 254 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 866 KiB/s rd, 353 KiB/s wr, 156 op/s
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Feb 02 15:40:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Feb 02 15:40:27 compute-0 ceph-mon[75334]: osdmap e349: 3 total, 3 up, 3 in
Feb 02 15:40:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3026114537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3026114537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:27 compute-0 ceph-mon[75334]: pgmap v1352: 305 pgs: 305 active+clean; 254 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 866 KiB/s rd, 353 KiB/s wr, 156 op/s
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Feb 02 15:40:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Feb 02 15:40:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Feb 02 15:40:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294789164' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294789164' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:28 compute-0 ceph-mon[75334]: osdmap e350: 3 total, 3 up, 3 in
Feb 02 15:40:28 compute-0 ceph-mon[75334]: osdmap e351: 3 total, 3 up, 3 in
Feb 02 15:40:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4294789164' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4294789164' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Feb 02 15:40:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Feb 02 15:40:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Feb 02 15:40:28 compute-0 nova_compute[239545]: 2026-02-02 15:40:28.990 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1109850378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1109850378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:29 compute-0 nova_compute[239545]: 2026-02-02 15:40:29.217 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 250 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.7 KiB/s wr, 68 op/s
Feb 02 15:40:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Feb 02 15:40:29 compute-0 ceph-mon[75334]: osdmap e352: 3 total, 3 up, 3 in
Feb 02 15:40:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1109850378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1109850378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:29 compute-0 ceph-mon[75334]: pgmap v1356: 305 pgs: 305 active+clean; 250 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.7 KiB/s wr, 68 op/s
Feb 02 15:40:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Feb 02 15:40:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Feb 02 15:40:30 compute-0 ceph-mon[75334]: osdmap e353: 3 total, 3 up, 3 in
Feb 02 15:40:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2089517662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.582 239549 DEBUG nova.compute.manager [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.582 239549 DEBUG nova.compute.manager [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing instance network info cache due to event network-changed-ff69595e-71b6-4de9-a34f-11323c8da359. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.583 239549 DEBUG oslo_concurrency.lockutils [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.583 239549 DEBUG oslo_concurrency.lockutils [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.583 239549 DEBUG nova.network.neutron [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Refreshing network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:40:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 208 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 12 KiB/s wr, 178 op/s
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.660 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.673 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.674 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.674 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.674 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.674 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.675 239549 INFO nova.compute.manager [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Terminating instance
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.676 239549 DEBUG nova.compute.manager [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:40:31 compute-0 kernel: tapff69595e-71 (unregistering): left promiscuous mode
Feb 02 15:40:31 compute-0 NetworkManager[49171]: <info>  [1770046831.7240] device (tapff69595e-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.723 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.729 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00142|binding|INFO|Releasing lport ff69595e-71b6-4de9-a34f-11323c8da359 from this chassis (sb_readonly=0)
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00143|binding|INFO|Setting lport ff69595e-71b6-4de9-a34f-11323c8da359 down in Southbound
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00144|binding|INFO|Removing iface tapff69595e-71 ovn-installed in OVS
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.731 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.736 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:83:95 10.100.0.10'], port_security=['fa:16:3e:71:83:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0cd0267f-d963-4475-aa31-ae2d3864ad80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ff69595e-71b6-4de9-a34f-11323c8da359) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.737 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ff69595e-71b6-4de9-a34f-11323c8da359 in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 unbound from our chassis
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.739 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.739 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f321435-d909-47d9-9978-c1a6e976cdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.740 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd44032d-b6e6-453b-88db-f0f8aaf52c10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.741 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3 namespace which is not needed anymore
Feb 02 15:40:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 02 15:40:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 16.478s CPU time.
Feb 02 15:40:31 compute-0 systemd-machined[207609]: Machine qemu-12-instance-0000000c terminated.
Feb 02 15:40:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Feb 02 15:40:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2089517662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:31 compute-0 ceph-mon[75334]: pgmap v1358: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 208 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 12 KiB/s wr, 178 op/s
Feb 02 15:40:31 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [NOTICE]   (258325) : haproxy version is 2.8.14-c23fe91
Feb 02 15:40:31 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [NOTICE]   (258325) : path to executable is /usr/sbin/haproxy
Feb 02 15:40:31 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [WARNING]  (258325) : Exiting Master process...
Feb 02 15:40:31 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [ALERT]    (258325) : Current worker (258327) exited with code 143 (Terminated)
Feb 02 15:40:31 compute-0 neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3[258321]: [WARNING]  (258325) : All workers exited. Exiting... (0)
Feb 02 15:40:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Feb 02 15:40:31 compute-0 systemd[1]: libpod-4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16.scope: Deactivated successfully.
Feb 02 15:40:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Feb 02 15:40:31 compute-0 podman[261277]: 2026-02-02 15:40:31.880596077 +0000 UTC m=+0.057741724 container died 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:40:31 compute-0 kernel: tapff69595e-71: entered promiscuous mode
Feb 02 15:40:31 compute-0 NetworkManager[49171]: <info>  [1770046831.8904] manager: (tapff69595e-71): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Feb 02 15:40:31 compute-0 kernel: tapff69595e-71 (unregistering): left promiscuous mode
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00145|binding|INFO|Claiming lport ff69595e-71b6-4de9-a34f-11323c8da359 for this chassis.
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.894 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00146|binding|INFO|ff69595e-71b6-4de9-a34f-11323c8da359: Claiming fa:16:3e:71:83:95 10.100.0.10
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.904 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:83:95 10.100.0.10'], port_security=['fa:16:3e:71:83:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0cd0267f-d963-4475-aa31-ae2d3864ad80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ff69595e-71b6-4de9-a34f-11323c8da359) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.906 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 ovn_controller[144995]: 2026-02-02T15:40:31Z|00147|binding|INFO|Releasing lport ff69595e-71b6-4de9-a34f-11323c8da359 from this chassis (sb_readonly=0)
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.912 239549 INFO nova.virt.libvirt.driver [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Instance destroyed successfully.
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.913 239549 DEBUG nova.objects.instance [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lazy-loading 'resources' on Instance uuid 0cd0267f-d963-4475-aa31-ae2d3864ad80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.916 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:83:95 10.100.0.10'], port_security=['fa:16:3e:71:83:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0cd0267f-d963-4475-aa31-ae2d3864ad80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f321435-d909-47d9-9978-c1a6e976cdf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46fcff5180ad4462a78fc4ba0bf7c266', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df6850bc-5320-4ccb-85d3-0e9f88b0ebcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a6ad9bc-2949-4854-862e-b465f4808980, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=ff69595e-71b6-4de9-a34f-11323c8da359) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16-userdata-shm.mount: Deactivated successfully.
Feb 02 15:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd375382865bbb2a142e01122c9ab6a48a8ef35eb35f8d4c25ccba0035492f3-merged.mount: Deactivated successfully.
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.929 239549 DEBUG nova.virt.libvirt.vif [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:38:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1259992382',display_name='tempest-TestStampPattern-server-1259992382',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1259992382',id=12,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU3Qd28tTX1c5qwGJRKT3n61SGNF68frpFMSsyV8cHZ2kSTbPtWsGt0wKjJJJJlLa3QDX/7DBKeziYUBGfREdOy19PqZh47/jl2MuarCSlTN9sOG0Vwc8p2ZOsRH+TAQg==',key_name='tempest-TestStampPattern-1309840176',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:38:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46fcff5180ad4462a78fc4ba0bf7c266',ramdisk_id='',reservation_id='r-iab8kkrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-2129228693',owner_user_name='tempest-TestStampPattern-2129228693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:39:30Z,user_data=None,user_id='52fc74263c9d4d478b0b870727c4fa0c',uuid=0cd0267f-d963-4475-aa31-ae2d3864ad80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.930 239549 DEBUG nova.network.os_vif_util [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converting VIF {"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.931 239549 DEBUG nova.network.os_vif_util [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.932 239549 DEBUG os_vif [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:40:31 compute-0 podman[261277]: 2026-02-02 15:40:31.933083421 +0000 UTC m=+0.110229068 container cleanup 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.933 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.934 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff69595e-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.935 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.938 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:40:31 compute-0 nova_compute[239545]: 2026-02-02 15:40:31.941 239549 INFO os_vif [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:83:95,bridge_name='br-int',has_traffic_filtering=True,id=ff69595e-71b6-4de9-a34f-11323c8da359,network=Network(2f321435-d909-47d9-9978-c1a6e976cdf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff69595e-71')
Feb 02 15:40:31 compute-0 systemd[1]: libpod-conmon-4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16.scope: Deactivated successfully.
Feb 02 15:40:31 compute-0 podman[261314]: 2026-02-02 15:40:31.990367243 +0000 UTC m=+0.040323641 container remove 4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.994 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b98577c8-7f68-428a-a325-42b67a472819]: (4, ('Mon Feb  2 03:40:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3 (4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16)\n4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16\nMon Feb  2 03:40:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3 (4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16)\n4535c956c016a1f1767a8951418957e1c9f57440760217ce3273219182cf9a16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.996 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8037dc-3164-4cb8-a4d8-e1a73fd66a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:31 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:31.996 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f321435-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:40:31 compute-0 kernel: tap2f321435-d0: left promiscuous mode
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.000 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.005 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.007 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.008 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[993f255b-6b98-4ef7-84e0-4311a04330d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.020 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e6d9b5-c330-463d-94d0-65b2edc00f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.023 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b8910332-989d-4a3d-9ffa-60ddd2325ac0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.039 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff753f6-eb03-4469-952f-41d586087984]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422041, 'reachable_time': 39564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261345, 'error': None, 'target': 'ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.042 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f321435-d909-47d9-9978-c1a6e976cdf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.042 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[6da13f58-cfd0-4e49-b11f-c7c6fc4c84f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d2f321435\x2dd909\x2d47d9\x2d9978\x2dc1a6e976cdf3.mount: Deactivated successfully.
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.043 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ff69595e-71b6-4de9-a34f-11323c8da359 in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 unbound from our chassis
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.044 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f321435-d909-47d9-9978-c1a6e976cdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.045 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[684dc61e-2047-4126-b6b7-5fb804f2429b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.046 154982 INFO neutron.agent.ovn.metadata.agent [-] Port ff69595e-71b6-4de9-a34f-11323c8da359 in datapath 2f321435-d909-47d9-9978-c1a6e976cdf3 unbound from our chassis
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.047 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f321435-d909-47d9-9978-c1a6e976cdf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:40:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:32.047 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b985a99f-c016-41e7-8765-6cd0a4ea8a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.211 239549 INFO nova.virt.libvirt.driver [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Deleting instance files /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80_del
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.212 239549 INFO nova.virt.libvirt.driver [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Deletion of /var/lib/nova/instances/0cd0267f-d963-4475-aa31-ae2d3864ad80_del complete
Feb 02 15:40:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426524156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.266 239549 INFO nova.compute.manager [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.267 239549 DEBUG oslo.service.loopingcall [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.267 239549 DEBUG nova.compute.manager [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.267 239549 DEBUG nova.network.neutron [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.696 239549 DEBUG nova.network.neutron [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updated VIF entry in instance network info cache for port ff69595e-71b6-4de9-a34f-11323c8da359. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.697 239549 DEBUG nova.network.neutron [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [{"id": "ff69595e-71b6-4de9-a34f-11323c8da359", "address": "fa:16:3e:71:83:95", "network": {"id": "2f321435-d909-47d9-9978-c1a6e976cdf3", "bridge": "br-int", "label": "tempest-TestStampPattern-822433096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46fcff5180ad4462a78fc4ba0bf7c266", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff69595e-71", "ovs_interfaceid": "ff69595e-71b6-4de9-a34f-11323c8da359", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.715 239549 DEBUG oslo_concurrency.lockutils [req-5c91f7ed-d23d-4f46-9bac-5883a3f58337 req-c3a455fa-1ddb-4e07-bf33-19854881f0aa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0cd0267f-d963-4475-aa31-ae2d3864ad80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.768 239549 DEBUG nova.network.neutron [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.782 239549 INFO nova.compute.manager [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Took 0.51 seconds to deallocate network for instance.
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.818 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.819 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.828 239549 DEBUG nova.compute.manager [req-b474e10b-9dcc-4cab-8f3d-1a7c51178ff1 req-fc0081f9-4c5b-4cbe-baf8-e48ce59e805a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-vif-deleted-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:32 compute-0 nova_compute[239545]: 2026-02-02 15:40:32.866 239549 DEBUG oslo_concurrency.processutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:40:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Feb 02 15:40:32 compute-0 ceph-mon[75334]: osdmap e354: 3 total, 3 up, 3 in
Feb 02 15:40:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/426524156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Feb 02 15:40:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Feb 02 15:40:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:40:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226016926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.481 239549 DEBUG oslo_concurrency.processutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.486 239549 DEBUG nova.compute.provider_tree [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.504 239549 DEBUG nova.scheduler.client.report [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.524 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.599 239549 INFO nova.scheduler.client.report [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Deleted allocations for instance 0cd0267f-d963-4475-aa31-ae2d3864ad80
Feb 02 15:40:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 150 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 179 op/s
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.666 239549 DEBUG oslo_concurrency.lockutils [None req-e259ccf8-e355-4c5f-8d3d-9f457291993a 52fc74263c9d4d478b0b870727c4fa0c 46fcff5180ad4462a78fc4ba0bf7c266 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.672 239549 DEBUG nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-vif-unplugged-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.673 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.673 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.673 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.674 239549 DEBUG nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] No waiting events found dispatching network-vif-unplugged-ff69595e-71b6-4de9-a34f-11323c8da359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.674 239549 WARNING nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received unexpected event network-vif-unplugged-ff69595e-71b6-4de9-a34f-11323c8da359 for instance with vm_state deleted and task_state None.
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.674 239549 DEBUG nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.674 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.674 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.675 239549 DEBUG oslo_concurrency.lockutils [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0cd0267f-d963-4475-aa31-ae2d3864ad80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.675 239549 DEBUG nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] No waiting events found dispatching network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.675 239549 WARNING nova.compute.manager [req-f24dfe6d-eed6-4008-8e7a-26445c07b369 req-fddeefd5-758c-4e2d-bc17-d8e6e970736f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Received unexpected event network-vif-plugged-ff69595e-71b6-4de9-a34f-11323c8da359 for instance with vm_state deleted and task_state None.
Feb 02 15:40:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Feb 02 15:40:33 compute-0 ceph-mon[75334]: osdmap e355: 3 total, 3 up, 3 in
Feb 02 15:40:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2226016926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:40:33 compute-0 ceph-mon[75334]: pgmap v1361: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 150 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 179 op/s
Feb 02 15:40:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Feb 02 15:40:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Feb 02 15:40:33 compute-0 nova_compute[239545]: 2026-02-02 15:40:33.992 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:34 compute-0 ceph-mon[75334]: osdmap e356: 3 total, 3 up, 3 in
Feb 02 15:40:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/850693862' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/850693862' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 19 KiB/s wr, 326 op/s
Feb 02 15:40:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Feb 02 15:40:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Feb 02 15:40:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/850693862' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/850693862' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:35 compute-0 ceph-mon[75334]: pgmap v1363: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 19 KiB/s wr, 326 op/s
Feb 02 15:40:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Feb 02 15:40:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/814642631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/814642631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:36 compute-0 ceph-mon[75334]: osdmap e357: 3 total, 3 up, 3 in
Feb 02 15:40:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/814642631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/814642631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:36 compute-0 nova_compute[239545]: 2026-02-02 15:40:36.935 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:37 compute-0 podman[261370]: 2026-02-02 15:40:37.315968505 +0000 UTC m=+0.060098571 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:40:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184520063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:37 compute-0 podman[261369]: 2026-02-02 15:40:37.374451374 +0000 UTC m=+0.120277951 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 15:40:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 12 KiB/s wr, 252 op/s
Feb 02 15:40:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Feb 02 15:40:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Feb 02 15:40:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Feb 02 15:40:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2184520063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:37 compute-0 ceph-mon[75334]: pgmap v1365: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 12 KiB/s wr, 252 op/s
Feb 02 15:40:37 compute-0 ceph-mon[75334]: osdmap e358: 3 total, 3 up, 3 in
Feb 02 15:40:37 compute-0 nova_compute[239545]: 2026-02-02 15:40:37.999 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:38 compute-0 nova_compute[239545]: 2026-02-02 15:40:38.052 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Feb 02 15:40:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Feb 02 15:40:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Feb 02 15:40:38 compute-0 nova_compute[239545]: 2026-02-02 15:40:38.995 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:39 compute-0 nova_compute[239545]: 2026-02-02 15:40:39.199 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046824.1986063, b7efb964-7e90-423b-b648-41772085a2be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:40:39 compute-0 nova_compute[239545]: 2026-02-02 15:40:39.199 239549 INFO nova.compute.manager [-] [instance: b7efb964-7e90-423b-b648-41772085a2be] VM Stopped (Lifecycle Event)
Feb 02 15:40:39 compute-0 nova_compute[239545]: 2026-02-02 15:40:39.220 239549 DEBUG nova.compute.manager [None req-74013e41-2cd6-48ee-8d62-d671aa374b06 - - - - - -] [instance: b7efb964-7e90-423b-b648-41772085a2be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:40:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 8.9 KiB/s wr, 198 op/s
Feb 02 15:40:39 compute-0 ceph-mon[75334]: osdmap e359: 3 total, 3 up, 3 in
Feb 02 15:40:39 compute-0 ceph-mon[75334]: pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 8.9 KiB/s wr, 198 op/s
Feb 02 15:40:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Feb 02 15:40:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Feb 02 15:40:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Feb 02 15:40:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 6.3 KiB/s wr, 156 op/s
Feb 02 15:40:41 compute-0 nova_compute[239545]: 2026-02-02 15:40:41.940 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:42 compute-0 ceph-mon[75334]: osdmap e360: 3 total, 3 up, 3 in
Feb 02 15:40:42 compute-0 ceph-mon[75334]: pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 6.3 KiB/s wr, 156 op/s
Feb 02 15:40:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Feb 02 15:40:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:40:42
Feb 02 15:40:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:40:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:40:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'default.rgw.control']
Feb 02 15:40:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:40:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Feb 02 15:40:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Feb 02 15:40:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:40:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186144372' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:40:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186144372' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 6.2 KiB/s wr, 169 op/s
Feb 02 15:40:43 compute-0 ceph-mon[75334]: osdmap e361: 3 total, 3 up, 3 in
Feb 02 15:40:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3186144372' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:40:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3186144372' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:40:43 compute-0 ceph-mon[75334]: pgmap v1372: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 6.2 KiB/s wr, 169 op/s
Feb 02 15:40:43 compute-0 nova_compute[239545]: 2026-02-02 15:40:43.996 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:40:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3269488839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:44 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3269488839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:40:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:40:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 8.5 KiB/s wr, 212 op/s
Feb 02 15:40:45 compute-0 ceph-mon[75334]: pgmap v1373: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 8.5 KiB/s wr, 212 op/s
Feb 02 15:40:46 compute-0 nova_compute[239545]: 2026-02-02 15:40:46.910 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046831.9088008, 0cd0267f-d963-4475-aa31-ae2d3864ad80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:40:46 compute-0 nova_compute[239545]: 2026-02-02 15:40:46.910 239549 INFO nova.compute.manager [-] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] VM Stopped (Lifecycle Event)
Feb 02 15:40:46 compute-0 nova_compute[239545]: 2026-02-02 15:40:46.950 239549 DEBUG nova.compute.manager [None req-06dd0e93-fbbf-43d5-abe1-5c0d53ccecb4 - - - - - -] [instance: 0cd0267f-d963-4475-aa31-ae2d3864ad80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:40:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Feb 02 15:40:47 compute-0 nova_compute[239545]: 2026-02-02 15:40:47.318 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Feb 02 15:40:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Feb 02 15:40:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Feb 02 15:40:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Feb 02 15:40:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Feb 02 15:40:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Feb 02 15:40:48 compute-0 ceph-mon[75334]: osdmap e362: 3 total, 3 up, 3 in
Feb 02 15:40:48 compute-0 ceph-mon[75334]: pgmap v1375: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Feb 02 15:40:48 compute-0 ceph-mon[75334]: osdmap e363: 3 total, 3 up, 3 in
Feb 02 15:40:49 compute-0 nova_compute[239545]: 2026-02-02 15:40:49.041 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394950394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.8 KiB/s wr, 99 op/s
Feb 02 15:40:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2394950394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:50 compute-0 ceph-mon[75334]: pgmap v1377: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 3.8 KiB/s wr, 99 op/s
Feb 02 15:40:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Feb 02 15:40:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Feb 02 15:40:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Feb 02 15:40:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s
Feb 02 15:40:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Feb 02 15:40:51 compute-0 ceph-mon[75334]: osdmap e364: 3 total, 3 up, 3 in
Feb 02 15:40:51 compute-0 ceph-mon[75334]: pgmap v1379: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s
Feb 02 15:40:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Feb 02 15:40:51 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Feb 02 15:40:52 compute-0 nova_compute[239545]: 2026-02-02 15:40:52.321 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:52 compute-0 ceph-mon[75334]: osdmap e365: 3 total, 3 up, 3 in
Feb 02 15:40:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262173572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Feb 02 15:40:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Feb 02 15:40:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Feb 02 15:40:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1262173572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:53 compute-0 ceph-mon[75334]: pgmap v1381: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Feb 02 15:40:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Feb 02 15:40:54 compute-0 nova_compute[239545]: 2026-02-02 15:40:54.043 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.632998630001791e-07 of space, bias 1.0, pg target 0.00028898995890005373 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003588223524094249 of space, bias 1.0, pg target 0.10764670572282747 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.76944053090006e-06 of space, bias 1.0, pg target 0.0008308321592700181 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664942086670574 of space, bias 1.0, pg target 0.19994826260011722 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4955460144190022e-06 of space, bias 4.0, pg target 0.0017946552173028025 quantized to 16 (current 16)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:40:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:40:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Feb 02 15:40:54 compute-0 ceph-mon[75334]: osdmap e366: 3 total, 3 up, 3 in
Feb 02 15:40:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Feb 02 15:40:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Feb 02 15:40:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.5 KiB/s wr, 66 op/s
Feb 02 15:40:55 compute-0 ceph-mon[75334]: osdmap e367: 3 total, 3 up, 3 in
Feb 02 15:40:55 compute-0 ceph-mon[75334]: pgmap v1384: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.5 KiB/s wr, 66 op/s
Feb 02 15:40:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:40:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4191877008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4191877008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:40:57 compute-0 nova_compute[239545]: 2026-02-02 15:40:57.324 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Feb 02 15:40:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:40:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Feb 02 15:40:58 compute-0 ceph-mon[75334]: pgmap v1385: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Feb 02 15:40:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Feb 02 15:40:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Feb 02 15:40:59 compute-0 nova_compute[239545]: 2026-02-02 15:40:59.096 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:40:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:40:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:40:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:40:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:40:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Feb 02 15:40:59 compute-0 ceph-mon[75334]: osdmap e368: 3 total, 3 up, 3 in
Feb 02 15:40:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Feb 02 15:40:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Feb 02 15:40:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 68 op/s
Feb 02 15:41:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080568284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080568284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Feb 02 15:41:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Feb 02 15:41:00 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Feb 02 15:41:00 compute-0 ceph-mon[75334]: osdmap e369: 3 total, 3 up, 3 in
Feb 02 15:41:00 compute-0 ceph-mon[75334]: pgmap v1388: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 68 op/s
Feb 02 15:41:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1080568284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1080568284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:01 compute-0 ceph-mon[75334]: osdmap e370: 3 total, 3 up, 3 in
Feb 02 15:41:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.7 KiB/s wr, 113 op/s
Feb 02 15:41:02 compute-0 nova_compute[239545]: 2026-02-02 15:41:02.362 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Feb 02 15:41:02 compute-0 ceph-mon[75334]: pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.7 KiB/s wr, 113 op/s
Feb 02 15:41:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Feb 02 15:41:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Feb 02 15:41:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:03 compute-0 nova_compute[239545]: 2026-02-02 15:41:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:03 compute-0 ceph-mon[75334]: osdmap e371: 3 total, 3 up, 3 in
Feb 02 15:41:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.0 KiB/s wr, 131 op/s
Feb 02 15:41:04 compute-0 nova_compute[239545]: 2026-02-02 15:41:04.097 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:04 compute-0 nova_compute[239545]: 2026-02-02 15:41:04.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:04 compute-0 nova_compute[239545]: 2026-02-02 15:41:04.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:41:04 compute-0 nova_compute[239545]: 2026-02-02 15:41:04.571 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:41:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Feb 02 15:41:04 compute-0 ceph-mon[75334]: pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.0 KiB/s wr, 131 op/s
Feb 02 15:41:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Feb 02 15:41:04 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Feb 02 15:41:05 compute-0 nova_compute[239545]: 2026-02-02 15:41:05.566 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Feb 02 15:41:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Feb 02 15:41:05 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Feb 02 15:41:05 compute-0 ceph-mon[75334]: osdmap e372: 3 total, 3 up, 3 in
Feb 02 15:41:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 87 op/s
Feb 02 15:41:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Feb 02 15:41:06 compute-0 ceph-mon[75334]: osdmap e373: 3 total, 3 up, 3 in
Feb 02 15:41:06 compute-0 ceph-mon[75334]: pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 87 op/s
Feb 02 15:41:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Feb 02 15:41:06 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Feb 02 15:41:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1850680961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1850680961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:07 compute-0 nova_compute[239545]: 2026-02-02 15:41:07.366 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Feb 02 15:41:07 compute-0 ceph-mon[75334]: osdmap e374: 3 total, 3 up, 3 in
Feb 02 15:41:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1850680961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:07 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1850680961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Feb 02 15:41:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Feb 02 15:41:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.0 KiB/s wr, 90 op/s
Feb 02 15:41:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Feb 02 15:41:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Feb 02 15:41:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Feb 02 15:41:08 compute-0 podman[261418]: 2026-02-02 15:41:08.294675666 +0000 UTC m=+0.040398712 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:41:08 compute-0 podman[261417]: 2026-02-02 15:41:08.315746658 +0000 UTC m=+0.062202741 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.572 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.573 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:41:08 compute-0 nova_compute[239545]: 2026-02-02 15:41:08.573 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:08 compute-0 ceph-mon[75334]: osdmap e375: 3 total, 3 up, 3 in
Feb 02 15:41:08 compute-0 ceph-mon[75334]: pgmap v1398: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.0 KiB/s wr, 90 op/s
Feb 02 15:41:08 compute-0 ceph-mon[75334]: osdmap e376: 3 total, 3 up, 3 in
Feb 02 15:41:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Feb 02 15:41:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Feb 02 15:41:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.151 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:41:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764335103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.184 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.353 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.354 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4485MB free_disk=59.98822314105928GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.354 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.354 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.413 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.414 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:41:09 compute-0 nova_compute[239545]: 2026-02-02 15:41:09.432 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 3.2 KiB/s wr, 116 op/s
Feb 02 15:41:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Feb 02 15:41:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Feb 02 15:41:09 compute-0 ceph-mon[75334]: osdmap e377: 3 total, 3 up, 3 in
Feb 02 15:41:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2764335103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:09 compute-0 ceph-mon[75334]: pgmap v1401: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 3.2 KiB/s wr, 116 op/s
Feb 02 15:41:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Feb 02 15:41:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:41:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1570744511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:10 compute-0 nova_compute[239545]: 2026-02-02 15:41:10.001 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:10 compute-0 nova_compute[239545]: 2026-02-02 15:41:10.007 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:41:10 compute-0 nova_compute[239545]: 2026-02-02 15:41:10.028 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:41:10 compute-0 nova_compute[239545]: 2026-02-02 15:41:10.053 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:41:10 compute-0 nova_compute[239545]: 2026-02-02 15:41:10.054 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627001608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627001608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Feb 02 15:41:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Feb 02 15:41:10 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Feb 02 15:41:10 compute-0 ceph-mon[75334]: osdmap e378: 3 total, 3 up, 3 in
Feb 02 15:41:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1570744511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3627001608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3627001608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:11 compute-0 nova_compute[239545]: 2026-02-02 15:41:11.054 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:11 compute-0 nova_compute[239545]: 2026-02-02 15:41:11.055 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 76 KiB/s wr, 275 op/s
Feb 02 15:41:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Feb 02 15:41:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Feb 02 15:41:11 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Feb 02 15:41:11 compute-0 ceph-mon[75334]: osdmap e379: 3 total, 3 up, 3 in
Feb 02 15:41:11 compute-0 ceph-mon[75334]: pgmap v1404: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 76 KiB/s wr, 275 op/s
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3885030415' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3885030415' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3125221018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3125221018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 nova_compute[239545]: 2026-02-02 15:41:12.367 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:12 compute-0 nova_compute[239545]: 2026-02-02 15:41:12.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Feb 02 15:41:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Feb 02 15:41:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Feb 02 15:41:12 compute-0 ceph-mon[75334]: osdmap e380: 3 total, 3 up, 3 in
Feb 02 15:41:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3885030415' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3885030415' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3125221018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3125221018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:12 compute-0 ceph-mon[75334]: osdmap e381: 3 total, 3 up, 3 in
Feb 02 15:41:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 74 KiB/s wr, 162 op/s
Feb 02 15:41:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Feb 02 15:41:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Feb 02 15:41:13 compute-0 ceph-mon[75334]: pgmap v1407: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 74 KiB/s wr, 162 op/s
Feb 02 15:41:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.186 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568566831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568566831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.514 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.515 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.536 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.601 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.602 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.608 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.609 239549 INFO nova.compute.claims [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:41:14 compute-0 nova_compute[239545]: 2026-02-02 15:41:14.712 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Feb 02 15:41:14 compute-0 ceph-mon[75334]: osdmap e382: 3 total, 3 up, 3 in
Feb 02 15:41:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3568566831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3568566831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Feb 02 15:41:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Feb 02 15:41:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:41:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144964033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.213 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.218 239549 DEBUG nova.compute.provider_tree [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.232 239549 DEBUG nova.scheduler.client.report [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.250 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.251 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.293 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.293 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.310 239549 INFO nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.328 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.373 239549 INFO nova.virt.block_device [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Booting with volume 98433566-0f76-461e-9bc6-11a91aff2a53 at /dev/vda
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.451 239549 DEBUG nova.policy [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.502 239549 DEBUG os_brick.utils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.503 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.512 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.513 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[4bce41cc-a9f9-4cbb-b503-0f71fe6373e8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.513 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.520 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.520 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ac676778-6326-46f9-a347-b8d48702a09c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.521 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.528 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.528 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1951bd-95d6-44f6-a6fa-a091f0d3097f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.529 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[543e2dc0-146a-480a-a0c4-91e08c107b39]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.530 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.542 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.544 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.545 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.545 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.545 239549 DEBUG os_brick.utils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:41:15 compute-0 nova_compute[239545]: 2026-02-02 15:41:15.545 239549 DEBUG nova.virt.block_device [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Updating existing volume attachment record: 2dc1c300-8564-41d4-af8e-556e31cfc557 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:41:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 7.2 KiB/s wr, 186 op/s
Feb 02 15:41:15 compute-0 ceph-mon[75334]: osdmap e383: 3 total, 3 up, 3 in
Feb 02 15:41:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2144964033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:15 compute-0 ceph-mon[75334]: pgmap v1410: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 7.2 KiB/s wr, 186 op/s
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.200 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Successfully created port: 6e8bb61d-7abf-41af-b600-e512dee7d4c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:41:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:41:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540320662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.825 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.827 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.827 239549 INFO nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Creating image(s)
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.827 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.828 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Ensure instance console log exists: /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.828 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.828 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:16 compute-0 nova_compute[239545]: 2026-02-02 15:41:16.829 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Feb 02 15:41:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1540320662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Feb 02 15:41:16 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.069 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Successfully updated port: 6e8bb61d-7abf-41af-b600-e512dee7d4c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.085 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.085 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.085 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.203 239549 DEBUG nova.compute.manager [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-changed-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.204 239549 DEBUG nova.compute.manager [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Refreshing instance network info cache due to event network-changed-6e8bb61d-7abf-41af-b600-e512dee7d4c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.204 239549 DEBUG oslo_concurrency.lockutils [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.249 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:41:17 compute-0 nova_compute[239545]: 2026-02-02 15:41:17.371 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 6.0 KiB/s wr, 155 op/s
Feb 02 15:41:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/784314592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/784314592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Feb 02 15:41:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Feb 02 15:41:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Feb 02 15:41:17 compute-0 ceph-mon[75334]: osdmap e384: 3 total, 3 up, 3 in
Feb 02 15:41:17 compute-0 ceph-mon[75334]: pgmap v1412: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 6.0 KiB/s wr, 155 op/s
Feb 02 15:41:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/784314592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/784314592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.172 239549 DEBUG nova.network.neutron [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Updating instance_info_cache with network_info: [{"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.194 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.195 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Instance network_info: |[{"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.196 239549 DEBUG oslo_concurrency.lockutils [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.196 239549 DEBUG nova.network.neutron [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Refreshing network info cache for port 6e8bb61d-7abf-41af-b600-e512dee7d4c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.199 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Start _get_guest_xml network_info=[{"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '2dc1c300-8564-41d4-af8e-556e31cfc557', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-98433566-0f76-461e-9bc6-11a91aff2a53', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '98433566-0f76-461e-9bc6-11a91aff2a53', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9bf64275-8660-44f1-9fbd-a7b53f3b651b', 'attached_at': '', 'detached_at': '', 'volume_id': '98433566-0f76-461e-9bc6-11a91aff2a53', 'serial': '98433566-0f76-461e-9bc6-11a91aff2a53'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.204 239549 WARNING nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.208 239549 DEBUG nova.virt.libvirt.host [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.209 239549 DEBUG nova.virt.libvirt.host [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.211 239549 DEBUG nova.virt.libvirt.host [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.212 239549 DEBUG nova.virt.libvirt.host [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.212 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.212 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.213 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.213 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.213 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.213 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.214 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.214 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.214 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.214 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.214 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.215 239549 DEBUG nova.virt.hardware [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.237 239549 DEBUG nova.storage.rbd_utils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.241 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:41:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859648697' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.786 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.943 239549 DEBUG os_brick.encryptors [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Using volume encryption metadata '{'encryption_key_id': 'b2d67e9a-e7f9-46b4-9937-612bb4d2350d', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-98433566-0f76-461e-9bc6-11a91aff2a53', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '98433566-0f76-461e-9bc6-11a91aff2a53', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9bf64275-8660-44f1-9fbd-a7b53f3b651b', 'attached_at': '', 'detached_at': '', 'volume_id': '98433566-0f76-461e-9bc6-11a91aff2a53', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.945 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.966 239549 DEBUG barbicanclient.v1.secrets [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.966 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.991 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:18 compute-0 nova_compute[239545]: 2026-02-02 15:41:18.992 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 ceph-mon[75334]: osdmap e385: 3 total, 3 up, 3 in
Feb 02 15:41:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1859648697' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Feb 02 15:41:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Feb 02 15:41:19 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.019 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.020 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.040 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.040 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.067 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.068 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.090 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.091 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.116 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.116 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.139 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.140 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.162 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.162 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.182 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.182 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.187 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.216 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.216 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.236 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.236 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.254 239549 DEBUG nova.network.neutron [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Updated VIF entry in instance network info cache for port 6e8bb61d-7abf-41af-b600-e512dee7d4c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.254 239549 DEBUG nova.network.neutron [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Updating instance_info_cache with network_info: [{"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.263 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.264 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.269 239549 DEBUG oslo_concurrency.lockutils [req-e978e85e-fba1-4a98-8d50-26447ac5fec5 req-a39478d9-ce1a-4129-a695-7451cc328f4c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-9bf64275-8660-44f1-9fbd-a7b53f3b651b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.284 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.285 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.306 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.307 239549 INFO barbicanclient.base [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Calculated Secrets uuid ref: secrets/b2d67e9a-e7f9-46b4-9937-612bb4d2350d
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.329 239549 DEBUG barbicanclient.client [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.330 239549 DEBUG nova.virt.libvirt.host [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <volume>98433566-0f76-461e-9bc6-11a91aff2a53</volume>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:41:19 compute-0 nova_compute[239545]: </secret>
Feb 02 15:41:19 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.355 239549 DEBUG nova.virt.libvirt.vif [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:41:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-799321469',display_name='tempest-TestVolumeBootPattern-server-799321469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-799321469',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-0l6vwqb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:41:15Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=9bf64275-8660-44f1-9fbd-a7b53f3b651b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.355 239549 DEBUG nova.network.os_vif_util [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.356 239549 DEBUG nova.network.os_vif_util [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.359 239549 DEBUG nova.objects.instance [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf64275-8660-44f1-9fbd-a7b53f3b651b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.374 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <uuid>9bf64275-8660-44f1-9fbd-a7b53f3b651b</uuid>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <name>instance-0000000f</name>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-server-799321469</nova:name>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:41:18</nova:creationTime>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <nova:port uuid="6e8bb61d-7abf-41af-b600-e512dee7d4c1">
Feb 02 15:41:19 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <system>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="serial">9bf64275-8660-44f1-9fbd-a7b53f3b651b</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="uuid">9bf64275-8660-44f1-9fbd-a7b53f3b651b</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </system>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <os>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </os>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <features>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </features>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </source>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-98433566-0f76-461e-9bc6-11a91aff2a53">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </source>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <serial>98433566-0f76-461e-9bc6-11a91aff2a53</serial>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:41:19 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="fa1ad777-3c4b-41f0-a8a4-83c8a7f1d5a3"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:c2:11:56"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <target dev="tap6e8bb61d-7a"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/console.log" append="off"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <video>
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </video>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:41:19 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:41:19 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:41:19 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:41:19 compute-0 nova_compute[239545]: </domain>
Feb 02 15:41:19 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.376 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Preparing to wait for external event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.376 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.377 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.377 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.378 239549 DEBUG nova.virt.libvirt.vif [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:41:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-799321469',display_name='tempest-TestVolumeBootPattern-server-799321469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-799321469',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-0l6vwqb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:41:15Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=9bf64275-8660-44f1-9fbd-a7b53f3b651b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.378 239549 DEBUG nova.network.os_vif_util [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.379 239549 DEBUG nova.network.os_vif_util [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.379 239549 DEBUG os_vif [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.380 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.380 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.380 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.383 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.383 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e8bb61d-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.384 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e8bb61d-7a, col_values=(('external_ids', {'iface-id': '6e8bb61d-7abf-41af-b600-e512dee7d4c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:11:56', 'vm-uuid': '9bf64275-8660-44f1-9fbd-a7b53f3b651b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.407 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:19 compute-0 NetworkManager[49171]: <info>  [1770046879.4088] manager: (tap6e8bb61d-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.410 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.412 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.413 239549 INFO os_vif [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a')
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.450 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.450 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.450 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:c2:11:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.451 239549 INFO nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Using config drive
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.470 239549 DEBUG nova.storage.rbd_utils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 7.8 KiB/s wr, 212 op/s
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.729 239549 INFO nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Creating config drive at /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.735 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzg6a4xji execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.865 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzg6a4xji" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.887 239549 DEBUG nova.storage.rbd_utils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:19 compute-0 nova_compute[239545]: 2026-02-02 15:41:19.890 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config 9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:20 compute-0 ceph-mon[75334]: osdmap e386: 3 total, 3 up, 3 in
Feb 02 15:41:20 compute-0 ceph-mon[75334]: pgmap v1415: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 7.8 KiB/s wr, 212 op/s
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.034 239549 DEBUG oslo_concurrency.processutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config 9bf64275-8660-44f1-9fbd-a7b53f3b651b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.035 239549 INFO nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Deleting local config drive /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b/disk.config because it was imported into RBD.
Feb 02 15:41:20 compute-0 kernel: tap6e8bb61d-7a: entered promiscuous mode
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.0737] manager: (tap6e8bb61d-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.073 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_controller[144995]: 2026-02-02T15:41:20Z|00148|binding|INFO|Claiming lport 6e8bb61d-7abf-41af-b600-e512dee7d4c1 for this chassis.
Feb 02 15:41:20 compute-0 ovn_controller[144995]: 2026-02-02T15:41:20Z|00149|binding|INFO|6e8bb61d-7abf-41af-b600-e512dee7d4c1: Claiming fa:16:3e:c2:11:56 10.100.0.6
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.075 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.085 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:11:56 10.100.0.6'], port_security=['fa:16:3e:c2:11:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bf64275-8660-44f1-9fbd-a7b53f3b651b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c0eb4e8-bc20-4111-8350-1463414f08ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=6e8bb61d-7abf-41af-b600-e512dee7d4c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.087 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 6e8bb61d-7abf-41af-b600-e512dee7d4c1 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.088 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:20 compute-0 systemd-udevd[261648]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.096 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6da84d5a-bf42-4deb-8d85-d547897252c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.096 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473fc4ca-a1 in ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.098 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473fc4ca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.098 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d60a9f-4147-4abb-8e3a-6abe9737775c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.099 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[507e54c6-700d-4be4-9d77-688cbcf2c339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 systemd-machined[207609]: New machine qemu-15-instance-0000000f.
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.1019] device (tap6e8bb61d-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.1026] device (tap6e8bb61d-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:41:20 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.107 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[84dd19cc-4048-41e6-8c1b-edb00f91b3e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.113 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_controller[144995]: 2026-02-02T15:41:20Z|00150|binding|INFO|Setting lport 6e8bb61d-7abf-41af-b600-e512dee7d4c1 ovn-installed in OVS
Feb 02 15:41:20 compute-0 ovn_controller[144995]: 2026-02-02T15:41:20Z|00151|binding|INFO|Setting lport 6e8bb61d-7abf-41af-b600-e512dee7d4c1 up in Southbound
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.118 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.126 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebe4b92-fac0-4ac3-be15-9d54df0dbf01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.148 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[5630752e-3161-43ef-a714-b6effe2f3b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.153 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[92805cd3-bb4f-4cca-9f3d-ff5855998239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.1544] manager: (tap473fc4ca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.173 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[750f7d0b-e782-4608-81b5-ec7a6d8f1b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.176 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[7b95b6ea-27e3-4511-8e5b-d3271f0fe2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.1935] device (tap473fc4ca-a0): carrier: link connected
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.197 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[fd189ccc-e595-4932-9210-7fe4cdf888e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.211 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d45da77e-f5b8-4b74-bdb6-8021a4d061dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436610, 'reachable_time': 42476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261682, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.222 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4315261-1f5f-4478-9efd-4f77001a3c24]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:14cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436610, 'tstamp': 436610}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261683, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.237 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f8caf0-76af-4977-a910-749e998dc88c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436610, 'reachable_time': 42476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261684, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.258 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1446f6-598b-4819-b4dd-96d530b08512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.296 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8af37c-8d21-4b29-b402-768eeadc9d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.298 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.298 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.299 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.300 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 kernel: tap473fc4ca-a0: entered promiscuous mode
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.302 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.303 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_controller[144995]: 2026-02-02T15:41:20Z|00152|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:41:20 compute-0 NetworkManager[49171]: <info>  [1770046880.3047] manager: (tap473fc4ca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.310 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.311 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.312 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c497b849-b08d-4c60-9f4a-a35354e778f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.313 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:41:20 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:20.314 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'env', 'PROCESS_TAG=haproxy-473fc4ca-a137-447b-9349-9f4677babee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473fc4ca-a137-447b-9349-9f4677babee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.360 239549 DEBUG nova.compute.manager [req-92143a56-d28a-48fc-aa20-ebd486fb0a05 req-e1979250-38dc-40d8-b92d-0b822b2ccf21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.360 239549 DEBUG oslo_concurrency.lockutils [req-92143a56-d28a-48fc-aa20-ebd486fb0a05 req-e1979250-38dc-40d8-b92d-0b822b2ccf21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.361 239549 DEBUG oslo_concurrency.lockutils [req-92143a56-d28a-48fc-aa20-ebd486fb0a05 req-e1979250-38dc-40d8-b92d-0b822b2ccf21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.361 239549 DEBUG oslo_concurrency.lockutils [req-92143a56-d28a-48fc-aa20-ebd486fb0a05 req-e1979250-38dc-40d8-b92d-0b822b2ccf21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:20 compute-0 nova_compute[239545]: 2026-02-02 15:41:20.361 239549 DEBUG nova.compute.manager [req-92143a56-d28a-48fc-aa20-ebd486fb0a05 req-e1979250-38dc-40d8-b92d-0b822b2ccf21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Processing event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:41:20 compute-0 sudo[261694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:41:20 compute-0 sudo[261694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:20 compute-0 sudo[261694]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:20 compute-0 sudo[261729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:41:20 compute-0 sudo[261729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:20 compute-0 podman[261799]: 2026-02-02 15:41:20.664948445 +0000 UTC m=+0.049354679 container create 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:41:20 compute-0 systemd[1]: Started libpod-conmon-2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e.scope.
Feb 02 15:41:20 compute-0 podman[261799]: 2026-02-02 15:41:20.638577595 +0000 UTC m=+0.022983849 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:41:20 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901a3e1485f22a705c470bbd6df1c7de434b3991294bfd724f1df9fd2cfb3f22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:20 compute-0 podman[261799]: 2026-02-02 15:41:20.754968081 +0000 UTC m=+0.139374335 container init 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:41:20 compute-0 podman[261799]: 2026-02-02 15:41:20.764638256 +0000 UTC m=+0.149044490 container start 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 02 15:41:20 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [NOTICE]   (261819) : New worker (261829) forked
Feb 02 15:41:20 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [NOTICE]   (261819) : Loading success.
Feb 02 15:41:20 compute-0 sudo[261729]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: osdmap e387: 3 total, 3 up, 3 in
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:41:21 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:41:21 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:41:21 compute-0 sudo[261857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:41:21 compute-0 sudo[261857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:21 compute-0 sudo[261857]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:21 compute-0 sudo[261883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:41:21 compute-0 sudo[261883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.41748772 +0000 UTC m=+0.041132591 container create cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:41:21 compute-0 systemd[1]: Started libpod-conmon-cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7.scope.
Feb 02 15:41:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.484517466 +0000 UTC m=+0.108162507 container init cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.49041384 +0000 UTC m=+0.114058701 container start cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.493578627 +0000 UTC m=+0.117223518 container attach cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:41:21 compute-0 reverent_dhawan[261937]: 167 167
Feb 02 15:41:21 compute-0 systemd[1]: libpod-cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7.scope: Deactivated successfully.
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.496347284 +0000 UTC m=+0.119992145 container died cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.403351145 +0000 UTC m=+0.026996026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0d3a5d0a4573388be5a01f53a8275b66cd636f00731317d6f83637ba98048f0-merged.mount: Deactivated successfully.
Feb 02 15:41:21 compute-0 podman[261920]: 2026-02-02 15:41:21.532950063 +0000 UTC m=+0.156594924 container remove cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:41:21 compute-0 systemd[1]: libpod-conmon-cd3a1701570377f552aa3bb937ca54e04d84a8bfd7cd61067567918abcce7ee7.scope: Deactivated successfully.
Feb 02 15:41:21 compute-0 podman[261962]: 2026-02-02 15:41:21.649055642 +0000 UTC m=+0.036459186 container create a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 02 15:41:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 8.1 KiB/s wr, 117 op/s
Feb 02 15:41:21 compute-0 systemd[1]: Started libpod-conmon-a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c.scope.
Feb 02 15:41:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:21 compute-0 podman[261962]: 2026-02-02 15:41:21.710800121 +0000 UTC m=+0.098203715 container init a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:41:21 compute-0 podman[261962]: 2026-02-02 15:41:21.716811507 +0000 UTC m=+0.104215071 container start a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:41:21 compute-0 podman[261962]: 2026-02-02 15:41:21.719731478 +0000 UTC m=+0.107135052 container attach a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:41:21 compute-0 podman[261962]: 2026-02-02 15:41:21.633438183 +0000 UTC m=+0.020841757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305196908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305196908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:22 compute-0 recursing_noyce[261980]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:41:22 compute-0 recursing_noyce[261980]: --> All data devices are unavailable
Feb 02 15:41:22 compute-0 systemd[1]: libpod-a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c.scope: Deactivated successfully.
Feb 02 15:41:22 compute-0 podman[261962]: 2026-02-02 15:41:22.109515474 +0000 UTC m=+0.496919048 container died a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:41:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Feb 02 15:41:22 compute-0 ceph-mon[75334]: pgmap v1417: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 8.1 KiB/s wr, 117 op/s
Feb 02 15:41:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/305196908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/305196908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Feb 02 15:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d72593c3bd737fe15d13e93fca266d6183553fde07e63c1a42dc66c5c778293-merged.mount: Deactivated successfully.
Feb 02 15:41:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Feb 02 15:41:22 compute-0 podman[261962]: 2026-02-02 15:41:22.183584423 +0000 UTC m=+0.570987977 container remove a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:41:22 compute-0 systemd[1]: libpod-conmon-a271d0a5d4f7b91cb2f156946a2d05995294949fb279b55205a31951aba81c0c.scope: Deactivated successfully.
Feb 02 15:41:22 compute-0 sudo[261883]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:22 compute-0 sudo[262012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:41:22 compute-0 sudo[262012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:22 compute-0 sudo[262012]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:22 compute-0 sudo[262037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:41:22 compute-0 sudo[262037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.466 239549 DEBUG nova.compute.manager [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.467 239549 DEBUG oslo_concurrency.lockutils [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.467 239549 DEBUG oslo_concurrency.lockutils [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.467 239549 DEBUG oslo_concurrency.lockutils [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.468 239549 DEBUG nova.compute.manager [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] No waiting events found dispatching network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:41:22 compute-0 nova_compute[239545]: 2026-02-02 15:41:22.468 239549 WARNING nova.compute.manager [req-c80b3ac3-e9dc-45ba-86ce-ce99ec83da8b req-af84169d-629b-4c97-a394-e6670576ffc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received unexpected event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 for instance with vm_state building and task_state spawning.
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.654776244 +0000 UTC m=+0.057347834 container create 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:41:22 compute-0 systemd[1]: Started libpod-conmon-4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e.scope.
Feb 02 15:41:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.616942355 +0000 UTC m=+0.019513995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.74027158 +0000 UTC m=+0.142843200 container init 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.746559503 +0000 UTC m=+0.149131093 container start 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:41:22 compute-0 angry_perlman[262091]: 167 167
Feb 02 15:41:22 compute-0 systemd[1]: libpod-4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e.scope: Deactivated successfully.
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.749831672 +0000 UTC m=+0.152403262 container attach 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:41:22 compute-0 conmon[262091]: conmon 4731a5b53faa133c6770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e.scope/container/memory.events
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.751848762 +0000 UTC m=+0.154420352 container died 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d130def3ce6cdb40e8c17db2a9eb39f3a69b80047d0234640cfeff23cc22c4-merged.mount: Deactivated successfully.
Feb 02 15:41:22 compute-0 podman[262074]: 2026-02-02 15:41:22.851141933 +0000 UTC m=+0.253713523 container remove 4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:41:22 compute-0 systemd[1]: libpod-conmon-4731a5b53faa133c67703c57966261a60fcbfa7bae24c5356dbc87d0cf483b7e.scope: Deactivated successfully.
Feb 02 15:41:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Feb 02 15:41:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Feb 02 15:41:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Feb 02 15:41:22 compute-0 podman[262114]: 2026-02-02 15:41:22.992696781 +0000 UTC m=+0.032179773 container create b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:41:23 compute-0 systemd[1]: Started libpod-conmon-b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef.scope.
Feb 02 15:41:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc809428c2385bc7d6d3f2765d49cc12b03474693c69b03d7182aa279631cd8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc809428c2385bc7d6d3f2765d49cc12b03474693c69b03d7182aa279631cd8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc809428c2385bc7d6d3f2765d49cc12b03474693c69b03d7182aa279631cd8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc809428c2385bc7d6d3f2765d49cc12b03474693c69b03d7182aa279631cd8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.036 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046883.0361814, 9bf64275-8660-44f1-9fbd-a7b53f3b651b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.037 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] VM Started (Lifecycle Event)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.039 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.052 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:23.054021899 +0000 UTC m=+0.093504911 container init b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.061 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:23.063059729 +0000 UTC m=+0.102542731 container start b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.063 239549 INFO nova.virt.libvirt.driver [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Instance spawned successfully.
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.063 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.068 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:23.069566647 +0000 UTC m=+0.109049729 container attach b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:22.979024848 +0000 UTC m=+0.018507860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.085 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.085 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046883.036383, 9bf64275-8660-44f1-9fbd-a7b53f3b651b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.085 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] VM Paused (Lifecycle Event)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.094 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.094 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.095 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.095 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.095 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.096 239549 DEBUG nova.virt.libvirt.driver [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.103 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.108 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046883.0463483, 9bf64275-8660-44f1-9fbd-a7b53f3b651b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.108 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] VM Resumed (Lifecycle Event)
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.123 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.126 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.150 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.160 239549 INFO nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Took 6.33 seconds to spawn the instance on the hypervisor.
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.161 239549 DEBUG nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:23 compute-0 ceph-mon[75334]: osdmap e388: 3 total, 3 up, 3 in
Feb 02 15:41:23 compute-0 ceph-mon[75334]: osdmap e389: 3 total, 3 up, 3 in
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.216 239549 INFO nova.compute.manager [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Took 8.64 seconds to build instance.
Feb 02 15:41:23 compute-0 nova_compute[239545]: 2026-02-02 15:41:23.237 239549 DEBUG oslo_concurrency.lockutils [None req-8325e4bc-fb5c-4b1d-a014-a5171ed8ea31 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647867520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647867520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]: {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     "0": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "devices": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "/dev/loop3"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             ],
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_name": "ceph_lv0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_size": "21470642176",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "name": "ceph_lv0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "tags": {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_name": "ceph",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.crush_device_class": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.encrypted": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.objectstore": "bluestore",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_id": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.vdo": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.with_tpm": "0"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             },
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "vg_name": "ceph_vg0"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         }
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     ],
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     "1": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "devices": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "/dev/loop4"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             ],
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_name": "ceph_lv1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_size": "21470642176",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "name": "ceph_lv1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "tags": {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_name": "ceph",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.crush_device_class": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.encrypted": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.objectstore": "bluestore",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_id": "1",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.vdo": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.with_tpm": "0"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             },
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "vg_name": "ceph_vg1"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         }
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     ],
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     "2": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "devices": [
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "/dev/loop5"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             ],
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_name": "ceph_lv2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_size": "21470642176",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "name": "ceph_lv2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "tags": {
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.cluster_name": "ceph",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.crush_device_class": "",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.encrypted": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.objectstore": "bluestore",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osd_id": "2",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.vdo": "0",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:                 "ceph.with_tpm": "0"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             },
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "type": "block",
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:             "vg_name": "ceph_vg2"
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:         }
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]:     ]
Feb 02 15:41:23 compute-0 interesting_montalcini[262136]: }
Feb 02 15:41:23 compute-0 systemd[1]: libpod-b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef.scope: Deactivated successfully.
Feb 02 15:41:23 compute-0 conmon[262136]: conmon b8bd65ee34ecd37ea272 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef.scope/container/memory.events
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:23.34357537 +0000 UTC m=+0.383058372 container died b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc809428c2385bc7d6d3f2765d49cc12b03474693c69b03d7182aa279631cd8e-merged.mount: Deactivated successfully.
Feb 02 15:41:23 compute-0 podman[262114]: 2026-02-02 15:41:23.380092867 +0000 UTC m=+0.419575859 container remove b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:41:23 compute-0 systemd[1]: libpod-conmon-b8bd65ee34ecd37ea2725fc3151562e56728494f6b7ba81144b619e46965b3ef.scope: Deactivated successfully.
Feb 02 15:41:23 compute-0 sudo[262037]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:23 compute-0 sudo[262158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:41:23 compute-0 sudo[262158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:23 compute-0 sudo[262158]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:23 compute-0 sudo[262183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:41:23 compute-0 sudo[262183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 38 KiB/s wr, 69 op/s
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.78008192 +0000 UTC m=+0.041114569 container create fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:41:23 compute-0 systemd[1]: Started libpod-conmon-fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937.scope.
Feb 02 15:41:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.856738592 +0000 UTC m=+0.117771231 container init fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.759975053 +0000 UTC m=+0.021007682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.863188288 +0000 UTC m=+0.124220897 container start fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:41:23 compute-0 funny_wing[262236]: 167 167
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.867185515 +0000 UTC m=+0.128218114 container attach fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 02 15:41:23 compute-0 systemd[1]: libpod-fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937.scope: Deactivated successfully.
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.868489377 +0000 UTC m=+0.129522016 container died fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 02 15:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfd464b35369a3edcbb8fcc2e2f94bb92e06acec3548d428d450022e676debe5-merged.mount: Deactivated successfully.
Feb 02 15:41:23 compute-0 podman[262220]: 2026-02-02 15:41:23.908573251 +0000 UTC m=+0.169605870 container remove fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:41:23 compute-0 systemd[1]: libpod-conmon-fd519a8c2f5f30fea3a9bd9a2ce098f7c84b41818d9914ebd233bde7b07ef937.scope: Deactivated successfully.
Feb 02 15:41:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Feb 02 15:41:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Feb 02 15:41:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.061690259 +0000 UTC m=+0.041742675 container create 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:41:24 compute-0 systemd[1]: Started libpod-conmon-1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5.scope.
Feb 02 15:41:24 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128492dc7416942ba32a4a47cd2badaaa8d8d03fb0c51e6901786e122bfd867c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128492dc7416942ba32a4a47cd2badaaa8d8d03fb0c51e6901786e122bfd867c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128492dc7416942ba32a4a47cd2badaaa8d8d03fb0c51e6901786e122bfd867c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128492dc7416942ba32a4a47cd2badaaa8d8d03fb0c51e6901786e122bfd867c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.045880935 +0000 UTC m=+0.025933401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.14779181 +0000 UTC m=+0.127844246 container init 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.158798566 +0000 UTC m=+0.138850992 container start 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.161827631 +0000 UTC m=+0.141880097 container attach 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:41:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3647867520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3647867520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:24 compute-0 ceph-mon[75334]: pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 38 KiB/s wr, 69 op/s
Feb 02 15:41:24 compute-0 ceph-mon[75334]: osdmap e390: 3 total, 3 up, 3 in
Feb 02 15:41:24 compute-0 nova_compute[239545]: 2026-02-02 15:41:24.189 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:24 compute-0 nova_compute[239545]: 2026-02-02 15:41:24.406 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:24 compute-0 lvm[262356]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:41:24 compute-0 lvm[262356]: VG ceph_vg0 finished
Feb 02 15:41:24 compute-0 lvm[262357]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:41:24 compute-0 lvm[262357]: VG ceph_vg1 finished
Feb 02 15:41:24 compute-0 lvm[262359]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:41:24 compute-0 lvm[262359]: VG ceph_vg2 finished
Feb 02 15:41:24 compute-0 lvm[262360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:41:24 compute-0 lvm[262360]: VG ceph_vg0 finished
Feb 02 15:41:24 compute-0 flamboyant_hermann[262278]: {}
Feb 02 15:41:24 compute-0 systemd[1]: libpod-1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5.scope: Deactivated successfully.
Feb 02 15:41:24 compute-0 systemd[1]: libpod-1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5.scope: Consumed 1.062s CPU time.
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.92197147 +0000 UTC m=+0.902023896 container died 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-128492dc7416942ba32a4a47cd2badaaa8d8d03fb0c51e6901786e122bfd867c-merged.mount: Deactivated successfully.
Feb 02 15:41:24 compute-0 podman[262261]: 2026-02-02 15:41:24.969319179 +0000 UTC m=+0.949371605 container remove 1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:41:24 compute-0 systemd[1]: libpod-conmon-1e771f5347983f2cfe9c462aa7648c2646daff1db65332447b588a41098813f5.scope: Deactivated successfully.
Feb 02 15:41:25 compute-0 sudo[262183]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:41:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:41:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:25 compute-0 sudo[262374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:41:25 compute-0 sudo[262374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:41:25 compute-0 sudo[262374]: pam_unix(sudo:session): session closed for user root
Feb 02 15:41:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Feb 02 15:41:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Feb 02 15:41:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Feb 02 15:41:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:25.336 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:41:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:25.338 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:41:25 compute-0 nova_compute[239545]: 2026-02-02 15:41:25.352 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 51 KiB/s wr, 313 op/s
Feb 02 15:41:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:41:26 compute-0 ceph-mon[75334]: osdmap e391: 3 total, 3 up, 3 in
Feb 02 15:41:26 compute-0 ceph-mon[75334]: pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 51 KiB/s wr, 313 op/s
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.340 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.368 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.369 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.369 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.369 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.369 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.370 239549 INFO nova.compute.manager [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Terminating instance
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.371 239549 DEBUG nova.compute.manager [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:41:26 compute-0 kernel: tap6e8bb61d-7a (unregistering): left promiscuous mode
Feb 02 15:41:26 compute-0 NetworkManager[49171]: <info>  [1770046886.4058] device (tap6e8bb61d-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:41:26 compute-0 ovn_controller[144995]: 2026-02-02T15:41:26Z|00153|binding|INFO|Releasing lport 6e8bb61d-7abf-41af-b600-e512dee7d4c1 from this chassis (sb_readonly=0)
Feb 02 15:41:26 compute-0 ovn_controller[144995]: 2026-02-02T15:41:26Z|00154|binding|INFO|Setting lport 6e8bb61d-7abf-41af-b600-e512dee7d4c1 down in Southbound
Feb 02 15:41:26 compute-0 ovn_controller[144995]: 2026-02-02T15:41:26Z|00155|binding|INFO|Removing iface tap6e8bb61d-7a ovn-installed in OVS
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.440 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.442 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.447 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:11:56 10.100.0.6'], port_security=['fa:16:3e:c2:11:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bf64275-8660-44f1-9fbd-a7b53f3b651b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c0eb4e8-bc20-4111-8350-1463414f08ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=6e8bb61d-7abf-41af-b600-e512dee7d4c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.448 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 6e8bb61d-7abf-41af-b600-e512dee7d4c1 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.449 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.449 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473fc4ca-a137-447b-9349-9f4677babee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.450 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[172b8548-dbd3-4ffb-8bfb-2ea0188609eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.451 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace which is not needed anymore
Feb 02 15:41:26 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 02 15:41:26 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.355s CPU time.
Feb 02 15:41:26 compute-0 systemd-machined[207609]: Machine qemu-15-instance-0000000f terminated.
Feb 02 15:41:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [NOTICE]   (261819) : haproxy version is 2.8.14-c23fe91
Feb 02 15:41:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [NOTICE]   (261819) : path to executable is /usr/sbin/haproxy
Feb 02 15:41:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [WARNING]  (261819) : Exiting Master process...
Feb 02 15:41:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [ALERT]    (261819) : Current worker (261829) exited with code 143 (Terminated)
Feb 02 15:41:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[261814]: [WARNING]  (261819) : All workers exited. Exiting... (0)
Feb 02 15:41:26 compute-0 systemd[1]: libpod-2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e.scope: Deactivated successfully.
Feb 02 15:41:26 compute-0 podman[262421]: 2026-02-02 15:41:26.565972029 +0000 UTC m=+0.038078299 container died 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e-userdata-shm.mount: Deactivated successfully.
Feb 02 15:41:26 compute-0 NetworkManager[49171]: <info>  [1770046886.5889] manager: (tap6e8bb61d-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.588 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-901a3e1485f22a705c470bbd6df1c7de434b3991294bfd724f1df9fd2cfb3f22-merged.mount: Deactivated successfully.
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.593 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 podman[262421]: 2026-02-02 15:41:26.598511582 +0000 UTC m=+0.070617852 container cleanup 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:41:26 compute-0 systemd[1]: libpod-conmon-2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e.scope: Deactivated successfully.
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.602 239549 INFO nova.virt.libvirt.driver [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Instance destroyed successfully.
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.602 239549 DEBUG nova.objects.instance [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 9bf64275-8660-44f1-9fbd-a7b53f3b651b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.618 239549 DEBUG nova.virt.libvirt.vif [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:41:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-799321469',display_name='tempest-TestVolumeBootPattern-server-799321469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-799321469',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:41:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-0l6vwqb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:41:23Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=9bf64275-8660-44f1-9fbd-a7b53f3b651b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.619 239549 DEBUG nova.network.os_vif_util [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "address": "fa:16:3e:c2:11:56", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8bb61d-7a", "ovs_interfaceid": "6e8bb61d-7abf-41af-b600-e512dee7d4c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.620 239549 DEBUG nova.network.os_vif_util [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.620 239549 DEBUG os_vif [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.622 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.623 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e8bb61d-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.624 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.627 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.629 239549 INFO os_vif [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:11:56,bridge_name='br-int',has_traffic_filtering=True,id=6e8bb61d-7abf-41af-b600-e512dee7d4c1,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e8bb61d-7a')
Feb 02 15:41:26 compute-0 podman[262459]: 2026-02-02 15:41:26.645780863 +0000 UTC m=+0.033281282 container remove 2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.649 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[fd52df12-f0a5-4a44-8e4a-84671d3d50a0]: (4, ('Mon Feb  2 03:41:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e)\n2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e\nMon Feb  2 03:41:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e)\n2609da350378aa8a0344d2811d6057cf5d2d67f7a6679db89c637a4517e80a7e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.651 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a13192cd-b199-4132-b0b3-b162ccb28b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.651 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.653 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 kernel: tap473fc4ca-a0: left promiscuous mode
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.655 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.658 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ece38660-db4b-4051-86ce-1762743cec98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.662 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.670 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e52604c-e986-4d74-a020-b53742146793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.671 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[86a02f70-dfd1-4daf-b2b2-9855b344981a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.684 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f304ed-4732-4e3b-8806-ca619103e407]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436605, 'reachable_time': 37427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262492, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.687 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:41:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d473fc4ca\x2da137\x2d447b\x2d9349\x2d9f4677babee6.mount: Deactivated successfully.
Feb 02 15:41:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:26.687 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cd11a6-587a-4e33-b547-eb01e3ab6e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.701 239549 DEBUG nova.compute.manager [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-unplugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.701 239549 DEBUG oslo_concurrency.lockutils [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.701 239549 DEBUG oslo_concurrency.lockutils [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.702 239549 DEBUG oslo_concurrency.lockutils [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.704 239549 DEBUG nova.compute.manager [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] No waiting events found dispatching network-vif-unplugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.704 239549 DEBUG nova.compute.manager [req-1d6846fc-6237-4ebe-af32-a19b9570b6d8 req-6ca7292d-5d12-43ea-88cb-8e428e8a580c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-unplugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.753 239549 INFO nova.virt.libvirt.driver [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Deleting instance files /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b_del
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.754 239549 INFO nova.virt.libvirt.driver [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Deletion of /var/lib/nova/instances/9bf64275-8660-44f1-9fbd-a7b53f3b651b_del complete
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.806 239549 INFO nova.compute.manager [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Took 0.43 seconds to destroy the instance on the hypervisor.
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.806 239549 DEBUG oslo.service.loopingcall [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.807 239549 DEBUG nova.compute.manager [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:41:26 compute-0 nova_compute[239545]: 2026-02-02 15:41:26.807 239549 DEBUG nova.network.neutron [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.634 239549 DEBUG nova.network.neutron [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.653 239549 INFO nova.compute.manager [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Took 0.85 seconds to deallocate network for instance.
Feb 02 15:41:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 37 KiB/s wr, 227 op/s
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.722 239549 DEBUG nova.compute.manager [req-3bfe71f5-6b71-4e26-910a-dd33e03e56c5 req-6de97144-8c62-4529-82db-5d9bd7af04d2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-deleted-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:27 compute-0 ceph-mon[75334]: pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 37 KiB/s wr, 227 op/s
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.926 239549 INFO nova.compute.manager [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Took 0.27 seconds to detach 1 volumes for instance.
Feb 02 15:41:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Feb 02 15:41:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Feb 02 15:41:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.975 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:27 compute-0 nova_compute[239545]: 2026-02-02 15:41:27.975 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.033 239549 DEBUG oslo_concurrency.processutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2749185344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2749185344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:41:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318967188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.609 239549 DEBUG oslo_concurrency.processutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.615 239549 DEBUG nova.compute.provider_tree [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.638 239549 DEBUG nova.scheduler.client.report [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.732 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.802 239549 INFO nova.scheduler.client.report [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 9bf64275-8660-44f1-9fbd-a7b53f3b651b
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.805 239549 DEBUG nova.compute.manager [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.806 239549 DEBUG oslo_concurrency.lockutils [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.806 239549 DEBUG oslo_concurrency.lockutils [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.806 239549 DEBUG oslo_concurrency.lockutils [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.806 239549 DEBUG nova.compute.manager [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] No waiting events found dispatching network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.807 239549 WARNING nova.compute.manager [req-2315b8fa-b320-4516-8bdb-329070d78a43 req-37ca7bc2-dfa5-42f7-a007-ca4132b26d51 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Received unexpected event network-vif-plugged-6e8bb61d-7abf-41af-b600-e512dee7d4c1 for instance with vm_state deleted and task_state None.
Feb 02 15:41:28 compute-0 nova_compute[239545]: 2026-02-02 15:41:28.925 239549 DEBUG oslo_concurrency.lockutils [None req-8d7b031a-cfe8-467e-a403-61541033ab00 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "9bf64275-8660-44f1-9fbd-a7b53f3b651b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Feb 02 15:41:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Feb 02 15:41:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Feb 02 15:41:28 compute-0 ceph-mon[75334]: osdmap e392: 3 total, 3 up, 3 in
Feb 02 15:41:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2749185344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2749185344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1318967188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:29 compute-0 nova_compute[239545]: 2026-02-02 15:41:29.191 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 230 op/s
Feb 02 15:41:30 compute-0 ceph-mon[75334]: osdmap e393: 3 total, 3 up, 3 in
Feb 02 15:41:30 compute-0 ceph-mon[75334]: pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 230 op/s
Feb 02 15:41:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/215926798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/215926798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/215926798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/215926798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952065717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952065717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:31 compute-0 nova_compute[239545]: 2026-02-02 15:41:31.626 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 8.6 KiB/s wr, 169 op/s
Feb 02 15:41:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2952065717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2952065717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:32 compute-0 ceph-mon[75334]: pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 8.6 KiB/s wr, 169 op/s
Feb 02 15:41:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Feb 02 15:41:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Feb 02 15:41:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Feb 02 15:41:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 8.5 KiB/s wr, 174 op/s
Feb 02 15:41:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Feb 02 15:41:34 compute-0 ceph-mon[75334]: osdmap e394: 3 total, 3 up, 3 in
Feb 02 15:41:34 compute-0 ceph-mon[75334]: pgmap v1430: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 8.5 KiB/s wr, 174 op/s
Feb 02 15:41:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Feb 02 15:41:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Feb 02 15:41:34 compute-0 nova_compute[239545]: 2026-02-02 15:41:34.232 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:35 compute-0 ceph-mon[75334]: osdmap e395: 3 total, 3 up, 3 in
Feb 02 15:41:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 8.5 KiB/s wr, 199 op/s
Feb 02 15:41:36 compute-0 ceph-mon[75334]: pgmap v1432: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 8.5 KiB/s wr, 199 op/s
Feb 02 15:41:36 compute-0 nova_compute[239545]: 2026-02-02 15:41:36.629 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 7.1 KiB/s wr, 167 op/s
Feb 02 15:41:37 compute-0 ceph-mon[75334]: pgmap v1433: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 7.1 KiB/s wr, 167 op/s
Feb 02 15:41:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Feb 02 15:41:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Feb 02 15:41:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Feb 02 15:41:39 compute-0 nova_compute[239545]: 2026-02-02 15:41:39.234 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:39 compute-0 podman[262517]: 2026-02-02 15:41:39.334488026 +0000 UTC m=+0.062987205 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:41:39 compute-0 podman[262516]: 2026-02-02 15:41:39.359484485 +0000 UTC m=+0.087120423 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:41:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 KiB/s wr, 124 op/s
Feb 02 15:41:39 compute-0 ceph-mon[75334]: osdmap e396: 3 total, 3 up, 3 in
Feb 02 15:41:39 compute-0 ceph-mon[75334]: pgmap v1435: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 KiB/s wr, 124 op/s
Feb 02 15:41:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Feb 02 15:41:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Feb 02 15:41:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Feb 02 15:41:41 compute-0 nova_compute[239545]: 2026-02-02 15:41:41.601 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046886.5997436, 9bf64275-8660-44f1-9fbd-a7b53f3b651b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:41 compute-0 nova_compute[239545]: 2026-02-02 15:41:41.601 239549 INFO nova.compute.manager [-] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] VM Stopped (Lifecycle Event)
Feb 02 15:41:41 compute-0 nova_compute[239545]: 2026-02-02 15:41:41.621 239549 DEBUG nova.compute.manager [None req-f5785073-b09f-44de-942f-341f6db9ab8e - - - - - -] [instance: 9bf64275-8660-44f1-9fbd-a7b53f3b651b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:41 compute-0 nova_compute[239545]: 2026-02-02 15:41:41.632 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 121 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 148 op/s
Feb 02 15:41:42 compute-0 ceph-mon[75334]: osdmap e397: 3 total, 3 up, 3 in
Feb 02 15:41:42 compute-0 ceph-mon[75334]: pgmap v1437: 305 pgs: 305 active+clean; 121 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 148 op/s
Feb 02 15:41:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:41:42
Feb 02 15:41:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:41:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:41:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Feb 02 15:41:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:41:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Feb 02 15:41:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Feb 02 15:41:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Feb 02 15:41:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/216731502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/216731502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Feb 02 15:41:44 compute-0 ceph-mon[75334]: osdmap e398: 3 total, 3 up, 3 in
Feb 02 15:41:44 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/216731502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:44 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/216731502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:44 compute-0 ceph-mon[75334]: pgmap v1439: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Feb 02 15:41:44 compute-0 nova_compute[239545]: 2026-02-02 15:41:44.235 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:41:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:41:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 3.2 MiB/s wr, 257 op/s
Feb 02 15:41:45 compute-0 ceph-mon[75334]: pgmap v1440: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 3.2 MiB/s wr, 257 op/s
Feb 02 15:41:45 compute-0 nova_compute[239545]: 2026-02-02 15:41:45.986 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:45 compute-0 nova_compute[239545]: 2026-02-02 15:41:45.987 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.014 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.112 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.113 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.125 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.126 239549 INFO nova.compute.claims [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.265 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.634 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:41:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296924482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.795 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.800 239549 DEBUG nova.compute.provider_tree [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.816 239549 DEBUG nova.scheduler.client.report [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.842 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:46 compute-0 nova_compute[239545]: 2026-02-02 15:41:46.843 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:41:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Feb 02 15:41:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Feb 02 15:41:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Feb 02 15:41:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1296924482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:41:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.6 MiB/s wr, 171 op/s
Feb 02 15:41:47 compute-0 ceph-mon[75334]: osdmap e399: 3 total, 3 up, 3 in
Feb 02 15:41:47 compute-0 ceph-mon[75334]: pgmap v1442: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.6 MiB/s wr, 171 op/s
Feb 02 15:41:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.158 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.158 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.214 239549 INFO nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.249 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.327 239549 INFO nova.virt.block_device [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Booting with volume snapshot b9faabcc-80ef-4392-86ee-7ac7a4cded35 at /dev/vda
Feb 02 15:41:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253671172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253671172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:48 compute-0 nova_compute[239545]: 2026-02-02 15:41:48.427 239549 DEBUG nova.policy [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:41:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3253671172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3253671172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:49 compute-0 nova_compute[239545]: 2026-02-02 15:41:49.238 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:49 compute-0 nova_compute[239545]: 2026-02-02 15:41:49.301 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Successfully created port: f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:41:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 1.4 MiB/s wr, 147 op/s
Feb 02 15:41:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:41:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241492653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:41:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241492653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:49 compute-0 ceph-mon[75334]: pgmap v1443: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 1.4 MiB/s wr, 147 op/s
Feb 02 15:41:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2241492653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:41:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2241492653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.060 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Successfully updated port: f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.079 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.081 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.081 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.267 239549 DEBUG nova.compute.manager [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Received event network-changed-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.267 239549 DEBUG nova.compute.manager [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Refreshing instance network info cache due to event network-changed-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.267 239549 DEBUG oslo_concurrency.lockutils [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.312 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:41:51 compute-0 nova_compute[239545]: 2026-02-02 15:41:51.638 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 1.3 MiB/s wr, 193 op/s
Feb 02 15:41:51 compute-0 ceph-mon[75334]: pgmap v1444: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 1.3 MiB/s wr, 193 op/s
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.165 239549 DEBUG nova.network.neutron [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Updating instance_info_cache with network_info: [{"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.202 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.203 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Instance network_info: |[{"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.204 239549 DEBUG oslo_concurrency.lockutils [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.205 239549 DEBUG nova.network.neutron [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Refreshing network info cache for port f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.884 239549 DEBUG os_brick.utils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.885 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.894 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.895 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[08f2492c-dc9c-4e49-8af0-43bdd85801cc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.896 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.901 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.902 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[77e9a544-8153-42b9-bda7-24c3c9695b9d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.904 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.910 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.910 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[984ee1f0-d905-4734-836b-9e13a98e7e23]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.914 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9f8e9-52d8-421f-813f-cdc499c4cbe5]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.914 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.928 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.930 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.931 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.931 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.931 239549 DEBUG os_brick.utils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:41:52 compute-0 nova_compute[239545]: 2026-02-02 15:41:52.931 239549 DEBUG nova.virt.block_device [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Updating existing volume attachment record: 110809ac-62b4-44c9-be5e-f1c62b06869d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:41:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Feb 02 15:41:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Feb 02 15:41:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Feb 02 15:41:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:41:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954415605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 2.9 KiB/s wr, 84 op/s
Feb 02 15:41:53 compute-0 ceph-mon[75334]: osdmap e400: 3 total, 3 up, 3 in
Feb 02 15:41:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/954415605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:53 compute-0 ceph-mon[75334]: pgmap v1446: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 2.9 KiB/s wr, 84 op/s
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.045 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.048 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.048 239549 INFO nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Creating image(s)
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.049 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.050 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Ensure instance console log exists: /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.050 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.051 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.052 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.056 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Start _get_guest_xml network_info=[{"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '110809ac-62b4-44c9-be5e-f1c62b06869d', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bb98fb58-2a03-4106-995b-33c7e57a0901', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bb98fb58-2a03-4106-995b-33c7e57a0901', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '17df128a-d6af-4570-b50f-c5fd7654c580', 'attached_at': '', 'detached_at': '', 'volume_id': 'bb98fb58-2a03-4106-995b-33c7e57a0901', 'serial': 'bb98fb58-2a03-4106-995b-33c7e57a0901'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.062 239549 WARNING nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.072 239549 DEBUG nova.virt.libvirt.host [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.073 239549 DEBUG nova.virt.libvirt.host [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.077 239549 DEBUG nova.virt.libvirt.host [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.077 239549 DEBUG nova.virt.libvirt.host [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.078 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.079 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.079 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.080 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.080 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.080 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.081 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.081 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.081 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.081 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.082 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.082 239549 DEBUG nova.virt.hardware [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.108 239549 DEBUG nova.storage.rbd_utils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 17df128a-d6af-4570-b50f-c5fd7654c580_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.111 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.141 239549 DEBUG nova.network.neutron [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Updated VIF entry in instance network info cache for port f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.141 239549 DEBUG nova.network.neutron [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Updating instance_info_cache with network_info: [{"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.161 239549 DEBUG oslo_concurrency.lockutils [req-dd4f8f6d-108f-4c65-9abe-417435afe551 req-83503637-3954-47fa-8b98-bdf3ae279d27 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-17df128a-d6af-4570-b50f-c5fd7654c580" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.278 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.73018560045017e-07 of space, bias 1.0, pg target 0.0002919055680135051 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007069071436670443 of space, bias 1.0, pg target 0.2120721431001133 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.450462336837372e-06 of space, bias 1.0, pg target 0.0007351387010512116 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675858388012598 of space, bias 1.0, pg target 0.20027575164037795 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4936830054008224e-06 of space, bias 4.0, pg target 0.001792419606480987 quantized to 16 (current 16)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:41:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:41:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:41:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177164660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.695 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.718 239549 DEBUG nova.virt.libvirt.vif [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1361320349',display_name='tempest-TestVolumeBootPattern-server-1361320349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1361320349',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ayxcdh7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None
,updated_at=2026-02-02T15:41:48Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=17df128a-d6af-4570-b50f-c5fd7654c580,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.719 239549 DEBUG nova.network.os_vif_util [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.721 239549 DEBUG nova.network.os_vif_util [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.723 239549 DEBUG nova.objects.instance [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17df128a-d6af-4570-b50f-c5fd7654c580 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.739 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <uuid>17df128a-d6af-4570-b50f-c5fd7654c580</uuid>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <name>instance-00000010</name>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-server-1361320349</nova:name>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:41:54</nova:creationTime>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <nova:port uuid="f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259">
Feb 02 15:41:54 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <system>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="serial">17df128a-d6af-4570-b50f-c5fd7654c580</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="uuid">17df128a-d6af-4570-b50f-c5fd7654c580</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </system>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <os>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </os>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <features>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </features>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/17df128a-d6af-4570-b50f-c5fd7654c580_disk.config">
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </source>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-bb98fb58-2a03-4106-995b-33c7e57a0901">
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </source>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:41:54 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <serial>bb98fb58-2a03-4106-995b-33c7e57a0901</serial>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:aa:56:52"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <target dev="tapf8ad1f20-9d"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/console.log" append="off"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <video>
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </video>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:41:54 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:41:54 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:41:54 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:41:54 compute-0 nova_compute[239545]: </domain>
Feb 02 15:41:54 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.740 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Preparing to wait for external event network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.741 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.741 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.741 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.743 239549 DEBUG nova.virt.libvirt.vif [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1361320349',display_name='tempest-TestVolumeBootPattern-server-1361320349',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1361320349',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ayxcdh7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_
certs=None,updated_at=2026-02-02T15:41:48Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=17df128a-d6af-4570-b50f-c5fd7654c580,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.743 239549 DEBUG nova.network.os_vif_util [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.744 239549 DEBUG nova.network.os_vif_util [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.745 239549 DEBUG os_vif [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.745 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.746 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.747 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.750 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.751 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8ad1f20-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.752 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8ad1f20-9d, col_values=(('external_ids', {'iface-id': 'f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:56:52', 'vm-uuid': '17df128a-d6af-4570-b50f-c5fd7654c580'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.754 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:54 compute-0 NetworkManager[49171]: <info>  [1770046914.7550] manager: (tapf8ad1f20-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.757 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.759 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.759 239549 INFO os_vif [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d')
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.801 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.801 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.801 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:aa:56:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.802 239549 INFO nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Using config drive
Feb 02 15:41:54 compute-0 nova_compute[239545]: 2026-02-02 15:41:54.819 239549 DEBUG nova.storage.rbd_utils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 17df128a-d6af-4570-b50f-c5fd7654c580_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/177164660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.181 239549 INFO nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Creating config drive at /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.187 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8znivkny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.310 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8znivkny" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.333 239549 DEBUG nova.storage.rbd_utils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 17df128a-d6af-4570-b50f-c5fd7654c580_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.339 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config 17df128a-d6af-4570-b50f-c5fd7654c580_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.472 239549 DEBUG oslo_concurrency.processutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config 17df128a-d6af-4570-b50f-c5fd7654c580_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.473 239549 INFO nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Deleting local config drive /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580/disk.config because it was imported into RBD.
Feb 02 15:41:55 compute-0 kernel: tapf8ad1f20-9d: entered promiscuous mode
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.5347] manager: (tapf8ad1f20-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Feb 02 15:41:55 compute-0 ovn_controller[144995]: 2026-02-02T15:41:55Z|00156|binding|INFO|Claiming lport f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 for this chassis.
Feb 02 15:41:55 compute-0 ovn_controller[144995]: 2026-02-02T15:41:55Z|00157|binding|INFO|f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259: Claiming fa:16:3e:aa:56:52 10.100.0.11
Feb 02 15:41:55 compute-0 systemd-udevd[262699]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.582 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.5946] device (tapf8ad1f20-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:41:55 compute-0 ovn_controller[144995]: 2026-02-02T15:41:55Z|00158|binding|INFO|Setting lport f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 ovn-installed in OVS
Feb 02 15:41:55 compute-0 ovn_controller[144995]: 2026-02-02T15:41:55Z|00159|binding|INFO|Setting lport f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 up in Southbound
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.5965] device (tapf8ad1f20-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.593 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:56:52 10.100.0.11'], port_security=['fa:16:3e:aa:56:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '17df128a-d6af-4570-b50f-c5fd7654c580', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c0eb4e8-bc20-4111-8350-1463414f08ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.595 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.597 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.601 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 systemd-machined[207609]: New machine qemu-16-instance-00000010.
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.605 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a5f4c4-6e25-4a95-afe4-64cdd3b2e711]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.606 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473fc4ca-a1 in ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.608 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473fc4ca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.608 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1eac2627-9a40-41da-854d-5cae2445089c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.609 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3ed058-5e26-46e0-a15e-f36e8df1916a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.620 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[a0de4663-e56e-4324-839b-1b6a3e0e2950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.631 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c419f3-3a68-4ca7-8612-7f43b591d85c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.661 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e7590e74-bf6c-4856-b59c-7522d97de8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.670 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[56bdfc0a-280a-474b-bfa8-99cfdc26d0f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.6726] manager: (tap473fc4ca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Feb 02 15:41:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.0 KiB/s wr, 78 op/s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.698 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[281cd3b4-3482-418a-97f6-654fc80089c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.702 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[27b1c76a-b27e-42a1-a387-d9577c96d7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.7260] device (tap473fc4ca-a0): carrier: link connected
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.730 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecc3922-617a-45ba-8f51-03c368fa407b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.741 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cf0cf4-0838-43d6-9e30-0c7807ae75ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440164, 'reachable_time': 37048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262735, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.751 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c9fed2-0139-418e-8025-29001da23faf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:14cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440164, 'tstamp': 440164}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262736, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.765 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b182f6-ad8e-45eb-b710-ee33f3a884d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440164, 'reachable_time': 37048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262737, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.789 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[43e329c4-e2c4-44c4-a0d2-1dfb26bfc44d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.838 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0f95cb-b863-45c2-bc4b-7c3343b566fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.840 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.840 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.840 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:55 compute-0 NetworkManager[49171]: <info>  [1770046915.8433] manager: (tap473fc4ca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Feb 02 15:41:55 compute-0 kernel: tap473fc4ca-a0: entered promiscuous mode
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.847 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.847 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:55 compute-0 ovn_controller[144995]: 2026-02-02T15:41:55Z|00160|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.851 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.852 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[85cc6d83-1be0-42f6-aa67-eedaebad8ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.854 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:41:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:55.855 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'env', 'PROCESS_TAG=haproxy-473fc4ca-a137-447b-9349-9f4677babee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473fc4ca-a137-447b-9349-9f4677babee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.858 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.860 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.982 239549 DEBUG nova.compute.manager [req-1e1022c3-4be2-4596-98a6-6d2afaec3f05 req-3c92c3a4-fe9e-4322-b035-5dfd6be559d6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Received event network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.982 239549 DEBUG oslo_concurrency.lockutils [req-1e1022c3-4be2-4596-98a6-6d2afaec3f05 req-3c92c3a4-fe9e-4322-b035-5dfd6be559d6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.983 239549 DEBUG oslo_concurrency.lockutils [req-1e1022c3-4be2-4596-98a6-6d2afaec3f05 req-3c92c3a4-fe9e-4322-b035-5dfd6be559d6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.983 239549 DEBUG oslo_concurrency.lockutils [req-1e1022c3-4be2-4596-98a6-6d2afaec3f05 req-3c92c3a4-fe9e-4322-b035-5dfd6be559d6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:55 compute-0 nova_compute[239545]: 2026-02-02 15:41:55.983 239549 DEBUG nova.compute.manager [req-1e1022c3-4be2-4596-98a6-6d2afaec3f05 req-3c92c3a4-fe9e-4322-b035-5dfd6be559d6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Processing event network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:41:55 compute-0 ceph-mon[75334]: pgmap v1447: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.0 KiB/s wr, 78 op/s
Feb 02 15:41:56 compute-0 podman[262769]: 2026-02-02 15:41:56.224372795 +0000 UTC m=+0.047856046 container create 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:41:56 compute-0 systemd[1]: Started libpod-conmon-13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a.scope.
Feb 02 15:41:56 compute-0 podman[262769]: 2026-02-02 15:41:56.197384948 +0000 UTC m=+0.020868209 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:41:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8b2d72791ad81e43a54b8093254af584b81852ee4b49e3dfc9d9222f109944/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:41:56 compute-0 podman[262769]: 2026-02-02 15:41:56.315962067 +0000 UTC m=+0.139445338 container init 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:41:56 compute-0 podman[262769]: 2026-02-02 15:41:56.321558073 +0000 UTC m=+0.145041324 container start 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:41:56 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [NOTICE]   (262788) : New worker (262790) forked
Feb 02 15:41:56 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [NOTICE]   (262788) : Loading success.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.543 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.544 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046916.5424578, 17df128a-d6af-4570-b50f-c5fd7654c580 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.544 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] VM Started (Lifecycle Event)
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.546 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.550 239549 INFO nova.virt.libvirt.driver [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Instance spawned successfully.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.550 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.562 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.568 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.571 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.571 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.572 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.572 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.572 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.573 239549 DEBUG nova.virt.libvirt.driver [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.600 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.601 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046916.542757, 17df128a-d6af-4570-b50f-c5fd7654c580 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.601 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] VM Paused (Lifecycle Event)
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.624 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.627 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046916.5465388, 17df128a-d6af-4570-b50f-c5fd7654c580 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.627 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] VM Resumed (Lifecycle Event)
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.630 239549 INFO nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Took 2.58 seconds to spawn the instance on the hypervisor.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.631 239549 DEBUG nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.688 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.691 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.719 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.727 239549 INFO nova.compute.manager [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Took 10.65 seconds to build instance.
Feb 02 15:41:56 compute-0 nova_compute[239545]: 2026-02-02 15:41:56.740 239549 DEBUG oslo_concurrency.lockutils [None req-4e2008ce-b836-42dc-8029-57afd0414029 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Feb 02 15:41:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Feb 02 15:41:57 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Feb 02 15:41:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Feb 02 15:41:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.050 239549 DEBUG nova.compute.manager [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Received event network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.050 239549 DEBUG oslo_concurrency.lockutils [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.050 239549 DEBUG oslo_concurrency.lockutils [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.051 239549 DEBUG oslo_concurrency.lockutils [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.051 239549 DEBUG nova.compute.manager [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] No waiting events found dispatching network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.051 239549 WARNING nova.compute.manager [req-c51dedb5-7fde-4231-b237-27c8764b4e43 req-3fad28e9-705f-4d47-a73f-bc2021cf27ef d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Received unexpected event network-vif-plugged-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 for instance with vm_state active and task_state None.
Feb 02 15:41:58 compute-0 ceph-mon[75334]: osdmap e401: 3 total, 3 up, 3 in
Feb 02 15:41:58 compute-0 ceph-mon[75334]: pgmap v1449: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.878 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.879 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.880 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.881 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.882 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.886 239549 INFO nova.compute.manager [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Terminating instance
Feb 02 15:41:58 compute-0 nova_compute[239545]: 2026-02-02 15:41:58.888 239549 DEBUG nova.compute.manager [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:41:59 compute-0 kernel: tapf8ad1f20-9d (unregistering): left promiscuous mode
Feb 02 15:41:59 compute-0 NetworkManager[49171]: <info>  [1770046919.0106] device (tapf8ad1f20-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.018 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 ovn_controller[144995]: 2026-02-02T15:41:59Z|00161|binding|INFO|Releasing lport f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 from this chassis (sb_readonly=0)
Feb 02 15:41:59 compute-0 ovn_controller[144995]: 2026-02-02T15:41:59Z|00162|binding|INFO|Setting lport f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 down in Southbound
Feb 02 15:41:59 compute-0 ovn_controller[144995]: 2026-02-02T15:41:59Z|00163|binding|INFO|Removing iface tapf8ad1f20-9d ovn-installed in OVS
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.021 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.025 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:56:52 10.100.0.11'], port_security=['fa:16:3e:aa:56:52 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '17df128a-d6af-4570-b50f-c5fd7654c580', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c0eb4e8-bc20-4111-8350-1463414f08ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.027 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.028 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473fc4ca-a137-447b-9349-9f4677babee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.029 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[18b09132-f78b-4be4-b280-a268e55023ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.029 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace which is not needed anymore
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.030 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb 02 15:41:59 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.417s CPU time.
Feb 02 15:41:59 compute-0 systemd-machined[207609]: Machine qemu-16-instance-00000010 terminated.
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.148 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [NOTICE]   (262788) : haproxy version is 2.8.14-c23fe91
Feb 02 15:41:59 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [NOTICE]   (262788) : path to executable is /usr/sbin/haproxy
Feb 02 15:41:59 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [WARNING]  (262788) : Exiting Master process...
Feb 02 15:41:59 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [ALERT]    (262788) : Current worker (262790) exited with code 143 (Terminated)
Feb 02 15:41:59 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[262784]: [WARNING]  (262788) : All workers exited. Exiting... (0)
Feb 02 15:41:59 compute-0 systemd[1]: libpod-13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a.scope: Deactivated successfully.
Feb 02 15:41:59 compute-0 conmon[262784]: conmon 13adf45279be9911ba5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a.scope/container/memory.events
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.158 239549 INFO nova.virt.libvirt.driver [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Instance destroyed successfully.
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.158 239549 DEBUG nova.objects.instance [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 17df128a-d6af-4570-b50f-c5fd7654c580 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:41:59 compute-0 podman[262865]: 2026-02-02 15:41:59.161693783 +0000 UTC m=+0.064798280 container died 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.179 239549 DEBUG nova.virt.libvirt.vif [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1361320349',display_name='tempest-TestVolumeBootPattern-server-1361320349',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1361320349',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:41:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ayxcdh7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:41:56Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=17df128a-d6af-4570-b50f-c5fd7654c580,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.180 239549 DEBUG nova.network.os_vif_util [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "address": "fa:16:3e:aa:56:52", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8ad1f20-9d", "ovs_interfaceid": "f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.181 239549 DEBUG nova.network.os_vif_util [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.181 239549 DEBUG os_vif [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.182 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.182 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8ad1f20-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.184 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff8b2d72791ad81e43a54b8093254af584b81852ee4b49e3dfc9d9222f109944-merged.mount: Deactivated successfully.
Feb 02 15:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a-userdata-shm.mount: Deactivated successfully.
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.186 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.188 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.190 239549 INFO os_vif [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:56:52,bridge_name='br-int',has_traffic_filtering=True,id=f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8ad1f20-9d')
Feb 02 15:41:59 compute-0 podman[262865]: 2026-02-02 15:41:59.198258873 +0000 UTC m=+0.101363370 container cleanup 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:41:59 compute-0 systemd[1]: libpod-conmon-13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a.scope: Deactivated successfully.
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.252 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.253 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.253 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:41:59 compute-0 podman[262919]: 2026-02-02 15:41:59.26379725 +0000 UTC m=+0.046898454 container remove 13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.268 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9414983-b7f6-40f3-af8f-f61fa8613a06]: (4, ('Mon Feb  2 03:41:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a)\n13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a\nMon Feb  2 03:41:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a)\n13adf45279be9911ba5f120f92db92abaa7772831b008f7f17292df8fa2c665a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.269 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b781ad53-b1a8-4e18-9fee-7e592329a88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.270 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.272 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 kernel: tap473fc4ca-a0: left promiscuous mode
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.279 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.282 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[977464ea-e027-4be0-b473-1d890039cfc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.292 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c634f1b-e603-4ed1-98b8-07d229bd9ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.293 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a24fd889-a9d9-4e64-88d1-12b2765fc1d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.304 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f270aa2c-41a7-4a84-879c-7c95a7987a95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440157, 'reachable_time': 37632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262937, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d473fc4ca\x2da137\x2d447b\x2d9349\x2d9f4677babee6.mount: Deactivated successfully.
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.307 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:41:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:41:59.307 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[4702ec19-8aef-4c71-a8e4-ff88670959c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.350 239549 INFO nova.virt.libvirt.driver [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Deleting instance files /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580_del
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.351 239549 INFO nova.virt.libvirt.driver [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Deletion of /var/lib/nova/instances/17df128a-d6af-4570-b50f-c5fd7654c580_del complete
Feb 02 15:41:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Feb 02 15:41:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Feb 02 15:41:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.409 239549 INFO nova.compute.manager [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Took 0.52 seconds to destroy the instance on the hypervisor.
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.410 239549 DEBUG oslo.service.loopingcall [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.411 239549 DEBUG nova.compute.manager [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:41:59 compute-0 nova_compute[239545]: 2026-02-02 15:41:59.411 239549 DEBUG nova.network.neutron [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:41:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 KiB/s wr, 94 op/s
Feb 02 15:42:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492160665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492160665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.175 239549 DEBUG nova.network.neutron [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.195 239549 INFO nova.compute.manager [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Took 0.78 seconds to deallocate network for instance.
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.237 239549 DEBUG nova.compute.manager [req-351b8cd5-85c6-46a2-878a-2a109c0cf623 req-674ae7d8-ad75-456b-87ce-c95b36028b21 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Received event network-vif-deleted-f8ad1f20-9d2d-4e06-bd3c-8fcf1870c259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:00 compute-0 ceph-mon[75334]: osdmap e402: 3 total, 3 up, 3 in
Feb 02 15:42:00 compute-0 ceph-mon[75334]: pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 KiB/s wr, 94 op/s
Feb 02 15:42:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3492160665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:00 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3492160665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.394 239549 INFO nova.compute.manager [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Took 0.20 seconds to detach 1 volumes for instance.
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.396 239549 DEBUG nova.compute.manager [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Deleting volume: bb98fb58-2a03-4106-995b-33c7e57a0901 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.580 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.581 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:00 compute-0 nova_compute[239545]: 2026-02-02 15:42:00.629 239549 DEBUG oslo_concurrency.processutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417866700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417866700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:42:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591022842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.169 239549 DEBUG oslo_concurrency.processutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.174 239549 DEBUG nova.compute.provider_tree [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.189 239549 DEBUG nova.scheduler.client.report [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.224 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.255 239549 INFO nova.scheduler.client.report [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 17df128a-d6af-4570-b50f-c5fd7654c580
Feb 02 15:42:01 compute-0 nova_compute[239545]: 2026-02-02 15:42:01.344 239549 DEBUG oslo_concurrency.lockutils [None req-952801aa-4377-4a45-9673-e572f3d94aa3 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "17df128a-d6af-4570-b50f-c5fd7654c580" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/692853910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/692853910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1417866700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1417866700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1591022842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/692853910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/692853910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 177 op/s
Feb 02 15:42:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Feb 02 15:42:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Feb 02 15:42:02 compute-0 ceph-mon[75334]: pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 177 op/s
Feb 02 15:42:02 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Feb 02 15:42:02 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:03 compute-0 ceph-mon[75334]: osdmap e403: 3 total, 3 up, 3 in
Feb 02 15:42:03 compute-0 nova_compute[239545]: 2026-02-02 15:42:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 28 KiB/s wr, 280 op/s
Feb 02 15:42:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670656187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670656187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.186 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.281 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:04 compute-0 ceph-mon[75334]: pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 28 KiB/s wr, 280 op/s
Feb 02 15:42:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2670656187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2670656187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:42:04 compute-0 nova_compute[239545]: 2026-02-02 15:42:04.561 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:42:05 compute-0 nova_compute[239545]: 2026-02-02 15:42:05.556 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 101 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 248 op/s
Feb 02 15:42:05 compute-0 ceph-mon[75334]: pgmap v1455: 305 pgs: 305 active+clean; 101 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 248 op/s
Feb 02 15:42:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 101 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 24 KiB/s wr, 182 op/s
Feb 02 15:42:07 compute-0 ceph-mon[75334]: pgmap v1456: 305 pgs: 305 active+clean; 101 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 24 KiB/s wr, 182 op/s
Feb 02 15:42:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Feb 02 15:42:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Feb 02 15:42:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Feb 02 15:42:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554449986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.568 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:42:08 compute-0 nova_compute[239545]: 2026-02-02 15:42:08.568 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Feb 02 15:42:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Feb 02 15:42:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Feb 02 15:42:08 compute-0 ceph-mon[75334]: osdmap e404: 3 total, 3 up, 3 in
Feb 02 15:42:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3554449986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:42:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752909662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.131 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.235 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.282 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.320 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.322 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4420MB free_disk=59.988223294727504GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.322 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.322 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.392 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.393 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.410 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.434 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.434 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.455 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.490 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:42:09 compute-0 nova_compute[239545]: 2026-02-02 15:42:09.507 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 KiB/s wr, 106 op/s
Feb 02 15:42:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Feb 02 15:42:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Feb 02 15:42:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.006085) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930006120, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2801, "num_deletes": 525, "total_data_size": 3624109, "memory_usage": 3705720, "flush_reason": "Manual Compaction"}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 02 15:42:10 compute-0 ceph-mon[75334]: osdmap e405: 3 total, 3 up, 3 in
Feb 02 15:42:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3752909662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:10 compute-0 ceph-mon[75334]: pgmap v1459: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 KiB/s wr, 106 op/s
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930022761, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3109992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26653, "largest_seqno": 29452, "table_properties": {"data_size": 3098066, "index_size": 7332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 30118, "raw_average_key_size": 21, "raw_value_size": 3071572, "raw_average_value_size": 2181, "num_data_blocks": 315, "num_entries": 1408, "num_filter_entries": 1408, "num_deletions": 525, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046784, "oldest_key_time": 1770046784, "file_creation_time": 1770046930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16731 microseconds, and 8087 cpu microseconds.
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.022811) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3109992 bytes OK
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.022835) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.025247) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.025263) EVENT_LOG_v1 {"time_micros": 1770046930025257, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.025277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3610949, prev total WAL file size 3610949, number of live WAL files 2.
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.025924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3037KB)], [59(10MB)]
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930025982, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14090685, "oldest_snapshot_seqno": -1}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:42:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910579762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:10 compute-0 nova_compute[239545]: 2026-02-02 15:42:10.056 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:10 compute-0 nova_compute[239545]: 2026-02-02 15:42:10.062 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6002 keys, 9429532 bytes, temperature: kUnknown
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930068276, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9429532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9384639, "index_size": 28783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 149962, "raw_average_key_size": 24, "raw_value_size": 9271885, "raw_average_value_size": 1544, "num_data_blocks": 1160, "num_entries": 6002, "num_filter_entries": 6002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770046930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.068514) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9429532 bytes
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.069581) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 332.5 rd, 222.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(7.6) write-amplify(3.0) OK, records in: 7017, records dropped: 1015 output_compression: NoCompression
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.069605) EVENT_LOG_v1 {"time_micros": 1770046930069591, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42377, "compaction_time_cpu_micros": 16894, "output_level": 6, "num_output_files": 1, "total_output_size": 9429532, "num_input_records": 7017, "num_output_records": 6002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930070009, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770046930070997, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.025832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.071106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.071114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.071117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.071120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:42:10.071122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:42:10 compute-0 nova_compute[239545]: 2026-02-02 15:42:10.079 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:42:10 compute-0 nova_compute[239545]: 2026-02-02 15:42:10.099 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:42:10 compute-0 nova_compute[239545]: 2026-02-02 15:42:10.100 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:10 compute-0 podman[263007]: 2026-02-02 15:42:10.305410172 +0000 UTC m=+0.047527129 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:42:10 compute-0 podman[263006]: 2026-02-02 15:42:10.367134455 +0000 UTC m=+0.111030095 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 15:42:11 compute-0 ceph-mon[75334]: osdmap e406: 3 total, 3 up, 3 in
Feb 02 15:42:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1910579762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:11 compute-0 nova_compute[239545]: 2026-02-02 15:42:11.096 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:11 compute-0 nova_compute[239545]: 2026-02-02 15:42:11.123 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:11 compute-0 nova_compute[239545]: 2026-02-02 15:42:11.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2793917566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.2 KiB/s wr, 57 op/s
Feb 02 15:42:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Feb 02 15:42:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2793917566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:12 compute-0 ceph-mon[75334]: pgmap v1461: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.2 KiB/s wr, 57 op/s
Feb 02 15:42:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Feb 02 15:42:12 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Feb 02 15:42:12 compute-0 nova_compute[239545]: 2026-02-02 15:42:12.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:42:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Feb 02 15:42:13 compute-0 ceph-mon[75334]: osdmap e407: 3 total, 3 up, 3 in
Feb 02 15:42:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Feb 02 15:42:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Feb 02 15:42:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 102 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Feb 02 15:42:14 compute-0 ceph-mon[75334]: osdmap e408: 3 total, 3 up, 3 in
Feb 02 15:42:14 compute-0 ceph-mon[75334]: pgmap v1464: 305 pgs: 305 active+clean; 102 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Feb 02 15:42:14 compute-0 nova_compute[239545]: 2026-02-02 15:42:14.156 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046919.1556182, 17df128a-d6af-4570-b50f-c5fd7654c580 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:14 compute-0 nova_compute[239545]: 2026-02-02 15:42:14.157 239549 INFO nova.compute.manager [-] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] VM Stopped (Lifecycle Event)
Feb 02 15:42:14 compute-0 nova_compute[239545]: 2026-02-02 15:42:14.182 239549 DEBUG nova.compute.manager [None req-4decfc30-b12a-4b17-88f3-9a6e37bf362b - - - - - -] [instance: 17df128a-d6af-4570-b50f-c5fd7654c580] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:14 compute-0 nova_compute[239545]: 2026-02-02 15:42:14.238 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:14 compute-0 nova_compute[239545]: 2026-02-02 15:42:14.284 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Feb 02 15:42:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Feb 02 15:42:15 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Feb 02 15:42:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb 02 15:42:15 compute-0 nova_compute[239545]: 2026-02-02 15:42:15.995 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:15 compute-0 nova_compute[239545]: 2026-02-02 15:42:15.995 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.011 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:42:16 compute-0 ceph-mon[75334]: osdmap e409: 3 total, 3 up, 3 in
Feb 02 15:42:16 compute-0 ceph-mon[75334]: pgmap v1466: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.087 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.088 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.098 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.098 239549 INFO nova.compute.claims [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.243 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3000603317' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:42:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666233890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.788 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.793 239549 DEBUG nova.compute.provider_tree [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.811 239549 DEBUG nova.scheduler.client.report [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.845 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.846 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.910 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.911 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.939 239549 INFO nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:42:16 compute-0 nova_compute[239545]: 2026-02-02 15:42:16.967 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.012 239549 INFO nova.virt.block_device [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Booting with volume 3e04b1a3-0372-4a95-8313-15b657dee567 at /dev/vda
Feb 02 15:42:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Feb 02 15:42:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3000603317' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3666233890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Feb 02 15:42:17 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.113 239549 DEBUG nova.policy [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.215 239549 DEBUG os_brick.utils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.217 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.230 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.230 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3e405d-4144-4f5c-b7ff-872d3b8f272b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.232 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.240 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.241 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[09bc718d-0e43-408e-b69f-849ad6fd0611]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.242 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.251 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.251 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[277afc39-7021-4f67-bc83-5ae666771e41]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.253 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[5c07e491-2d3e-4ead-9c3f-67cea8a8b1f3]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.253 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.281 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.283 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.283 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.283 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.283 239549 DEBUG os_brick.utils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:42:17 compute-0 nova_compute[239545]: 2026-02-02 15:42:17.284 239549 DEBUG nova.virt.block_device [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating existing volume attachment record: 5fbaa6fd-9905-4439-904c-1c813f3d5447 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:42:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 153 op/s
Feb 02 15:42:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Feb 02 15:42:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Feb 02 15:42:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Feb 02 15:42:18 compute-0 ceph-mon[75334]: osdmap e410: 3 total, 3 up, 3 in
Feb 02 15:42:18 compute-0 ceph-mon[75334]: pgmap v1468: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 153 op/s
Feb 02 15:42:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1426037410' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.342 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Successfully created port: f325e981-4c0c-4aa0-814b-8e0d58e800d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.584 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.586 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.586 239549 INFO nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Creating image(s)
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.587 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.587 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Ensure instance console log exists: /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.587 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.588 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:18 compute-0 nova_compute[239545]: 2026-02-02 15:42:18.588 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:19 compute-0 ceph-mon[75334]: osdmap e411: 3 total, 3 up, 3 in
Feb 02 15:42:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1426037410' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.240 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.254 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Successfully updated port: f325e981-4c0c-4aa0-814b-8e0d58e800d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.277 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.277 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.277 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.285 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.380 239549 DEBUG nova.compute.manager [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-changed-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.381 239549 DEBUG nova.compute.manager [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Refreshing instance network info cache due to event network-changed-f325e981-4c0c-4aa0-814b-8e0d58e800d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.382 239549 DEBUG oslo_concurrency.lockutils [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:19 compute-0 nova_compute[239545]: 2026-02-02 15:42:19.440 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:42:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Feb 02 15:42:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Feb 02 15:42:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Feb 02 15:42:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Feb 02 15:42:20 compute-0 ceph-mon[75334]: pgmap v1470: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.451 239549 DEBUG nova.network.neutron [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating instance_info_cache with network_info: [{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.478 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.478 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Instance network_info: |[{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.479 239549 DEBUG oslo_concurrency.lockutils [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.480 239549 DEBUG nova.network.neutron [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Refreshing network info cache for port f325e981-4c0c-4aa0-814b-8e0d58e800d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.483 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Start _get_guest_xml network_info=[{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '5fbaa6fd-9905-4439-904c-1c813f3d5447', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3e04b1a3-0372-4a95-8313-15b657dee567', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3e04b1a3-0372-4a95-8313-15b657dee567', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4d22e226-bdcc-49f4-b9b5-85c81397a0f3', 'attached_at': '', 'detached_at': '', 'volume_id': '3e04b1a3-0372-4a95-8313-15b657dee567', 'serial': '3e04b1a3-0372-4a95-8313-15b657dee567'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.490 239549 WARNING nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.496 239549 DEBUG nova.virt.libvirt.host [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.496 239549 DEBUG nova.virt.libvirt.host [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.499 239549 DEBUG nova.virt.libvirt.host [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.499 239549 DEBUG nova.virt.libvirt.host [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.499 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.500 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.500 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.500 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.500 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.501 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.501 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.501 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.501 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.501 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.502 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.502 239549 DEBUG nova.virt.hardware [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:42:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1056455761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1056455761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.523 239549 DEBUG nova.storage.rbd_utils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:20 compute-0 nova_compute[239545]: 2026-02-02 15:42:20.527 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941200264' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.041 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.073 239549 DEBUG nova.virt.libvirt.vif [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',display_name='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-2112237726',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ3I7ej2QEq412Pxe8wdTTgEK6xdMVdaKUJK8NpNgcYZHZmL1ut3LqFHWwwEEk4vb9ouHqvw3XDrJ+X+Wi45pbQkXF60G3n4jYLfmhBujBWP8h1RUz8SU1iZp6vasJ04pw==',key_name='tempest-keypair-927612033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-fw00ids4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:42:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=4d22e226-bdcc-49f4-b9b5-85c81397a0f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.074 239549 DEBUG nova.network.os_vif_util [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.075 239549 DEBUG nova.network.os_vif_util [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.076 239549 DEBUG nova.objects.instance [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.092 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <uuid>4d22e226-bdcc-49f4-b9b5-85c81397a0f3</uuid>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <name>instance-00000011</name>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-2112237726</nova:name>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:42:20</nova:creationTime>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <nova:port uuid="f325e981-4c0c-4aa0-814b-8e0d58e800d4">
Feb 02 15:42:21 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <system>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="serial">4d22e226-bdcc-49f4-b9b5-85c81397a0f3</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="uuid">4d22e226-bdcc-49f4-b9b5-85c81397a0f3</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </system>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <os>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </os>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <features>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </features>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config">
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </source>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-3e04b1a3-0372-4a95-8313-15b657dee567">
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </source>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:42:21 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <serial>3e04b1a3-0372-4a95-8313-15b657dee567</serial>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:3d:1d:eb"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <target dev="tapf325e981-4c"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/console.log" append="off"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <video>
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </video>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:42:21 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:42:21 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:42:21 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:42:21 compute-0 nova_compute[239545]: </domain>
Feb 02 15:42:21 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.093 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Preparing to wait for external event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.094 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.094 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.094 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.095 239549 DEBUG nova.virt.libvirt.vif [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',display_name='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-2112237726',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ3I7ej2QEq412Pxe8wdTTgEK6xdMVdaKUJK8NpNgcYZHZmL1ut3LqFHWwwEEk4vb9ouHqvw3XDrJ+X+Wi45pbQkXF60G3n4jYLfmhBujBWP8h1RUz8SU1iZp6vasJ04pw==',key_name='tempest-keypair-927612033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-fw00ids4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:42:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=4d22e226-bdcc-49f4-b9b5-85c81397a0f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.095 239549 DEBUG nova.network.os_vif_util [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.096 239549 DEBUG nova.network.os_vif_util [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.096 239549 DEBUG os_vif [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.096 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.097 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.097 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.100 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.100 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf325e981-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.101 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf325e981-4c, col_values=(('external_ids', {'iface-id': 'f325e981-4c0c-4aa0-814b-8e0d58e800d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:1d:eb', 'vm-uuid': '4d22e226-bdcc-49f4-b9b5-85c81397a0f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.106 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 NetworkManager[49171]: <info>  [1770046941.1076] manager: (tapf325e981-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.109 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.113 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.114 239549 INFO os_vif [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c')
Feb 02 15:42:21 compute-0 ceph-mon[75334]: osdmap e412: 3 total, 3 up, 3 in
Feb 02 15:42:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1056455761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1056455761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2941200264' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.190 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.190 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.191 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:3d:1d:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.191 239549 INFO nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Using config drive
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.209 239549 DEBUG nova.storage.rbd_utils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.631 239549 INFO nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Creating config drive at /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.636 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprc3ohelj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 63 op/s
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.704 239549 DEBUG nova.network.neutron [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updated VIF entry in instance network info cache for port f325e981-4c0c-4aa0-814b-8e0d58e800d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.705 239549 DEBUG nova.network.neutron [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating instance_info_cache with network_info: [{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.727 239549 DEBUG oslo_concurrency.lockutils [req-ed36f24a-ae49-4435-bd07-7993ff449186 req-4bd654be-aa24-4000-a126-ff051e606951 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.758 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprc3ohelj" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.777 239549 DEBUG nova.storage.rbd_utils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.780 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config 4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.884 239549 DEBUG oslo_concurrency.processutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config 4d22e226-bdcc-49f4-b9b5-85c81397a0f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.886 239549 INFO nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Deleting local config drive /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3/disk.config because it was imported into RBD.
Feb 02 15:42:21 compute-0 kernel: tapf325e981-4c: entered promiscuous mode
Feb 02 15:42:21 compute-0 NetworkManager[49171]: <info>  [1770046941.9235] manager: (tapf325e981-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Feb 02 15:42:21 compute-0 ovn_controller[144995]: 2026-02-02T15:42:21Z|00164|binding|INFO|Claiming lport f325e981-4c0c-4aa0-814b-8e0d58e800d4 for this chassis.
Feb 02 15:42:21 compute-0 ovn_controller[144995]: 2026-02-02T15:42:21Z|00165|binding|INFO|f325e981-4c0c-4aa0-814b-8e0d58e800d4: Claiming fa:16:3e:3d:1d:eb 10.100.0.10
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.924 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 ovn_controller[144995]: 2026-02-02T15:42:21Z|00166|binding|INFO|Setting lport f325e981-4c0c-4aa0-814b-8e0d58e800d4 ovn-installed in OVS
Feb 02 15:42:21 compute-0 ovn_controller[144995]: 2026-02-02T15:42:21Z|00167|binding|INFO|Setting lport f325e981-4c0c-4aa0-814b-8e0d58e800d4 up in Southbound
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.931 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:1d:eb 10.100.0.10'], port_security=['fa:16:3e:3d:1d:eb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d22e226-bdcc-49f4-b9b5-85c81397a0f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f81cf6a-8223-4b83-8701-8d2d1d8d2a2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f325e981-4c0c-4aa0-814b-8e0d58e800d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.931 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.932 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f325e981-4c0c-4aa0-814b-8e0d58e800d4 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.934 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:42:21 compute-0 nova_compute[239545]: 2026-02-02 15:42:21.936 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.942 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf161314-c1ad-4f43-a13b-59d3acfe7a00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.943 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473fc4ca-a1 in ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.945 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473fc4ca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.946 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[995728a5-62e3-4937-a6d9-1ccee85312c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.946 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1c08e6-ce76-4efa-8af1-13152d464234]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 systemd-udevd[263187]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.954 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[309d8b80-3da8-433e-a1cb-b5163d0ebe96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 systemd-machined[207609]: New machine qemu-17-instance-00000011.
Feb 02 15:42:21 compute-0 NetworkManager[49171]: <info>  [1770046941.9607] device (tapf325e981-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:42:21 compute-0 NetworkManager[49171]: <info>  [1770046941.9614] device (tapf325e981-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.966 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2c047e88-4de8-424c-8e0e-96ed818d3bbd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.993 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[f5320847-ad8e-41de-8d97-bc46d654a616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:21.997 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c212006e-cf69-4b3d-8bcc-3dd4b2f53665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:21 compute-0 NetworkManager[49171]: <info>  [1770046941.9983] manager: (tap473fc4ca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Feb 02 15:42:21 compute-0 systemd-udevd[263192]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.024 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2db16e63-8c9b-4834-bdb7-6c1a2cc429ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.028 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc968cf-ac53-40bd-97d3-35f21a34573c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 NetworkManager[49171]: <info>  [1770046942.0456] device (tap473fc4ca-a0): carrier: link connected
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.049 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[f0aeb1a8-cfa8-4440-b960-84d2eacf8a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.060 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6d37d47b-d241-4133-a87a-4c00c0adc1d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442796, 'reachable_time': 17277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263220, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.069 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a96f46cf-6d2a-4c3e-98ec-1176abf83e45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:14cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442796, 'tstamp': 442796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263221, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.081 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[40c027b8-9118-4db2-b555-d2a92a7532b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442796, 'reachable_time': 17277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263222, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.105 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfed2c5-8174-49fe-998a-f87e1a3bb900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.151 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3263e181-9912-4730-81a7-baf0768570e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.152 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.153 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.153 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.154 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:22 compute-0 NetworkManager[49171]: <info>  [1770046942.1555] manager: (tap473fc4ca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb 02 15:42:22 compute-0 kernel: tap473fc4ca-a0: entered promiscuous mode
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.157 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.160 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.162 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:22 compute-0 ovn_controller[144995]: 2026-02-02T15:42:22Z|00168|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.162 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.163 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.164 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3be14dd-7aad-46fc-9919-b3bc7a0eaa53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.164 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:42:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:22.165 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'env', 'PROCESS_TAG=haproxy-473fc4ca-a137-447b-9349-9f4677babee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473fc4ca-a137-447b-9349-9f4677babee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:42:22 compute-0 ceph-mon[75334]: pgmap v1472: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 63 op/s
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.170 239549 DEBUG nova.compute.manager [req-2060195f-9bdb-4d77-967a-701fb7c4b13a req-1d03c8cf-8d24-410b-a624-d568a6ecaecf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.170 239549 DEBUG oslo_concurrency.lockutils [req-2060195f-9bdb-4d77-967a-701fb7c4b13a req-1d03c8cf-8d24-410b-a624-d568a6ecaecf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.170 239549 DEBUG oslo_concurrency.lockutils [req-2060195f-9bdb-4d77-967a-701fb7c4b13a req-1d03c8cf-8d24-410b-a624-d568a6ecaecf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.171 239549 DEBUG oslo_concurrency.lockutils [req-2060195f-9bdb-4d77-967a-701fb7c4b13a req-1d03c8cf-8d24-410b-a624-d568a6ecaecf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.171 239549 DEBUG nova.compute.manager [req-2060195f-9bdb-4d77-967a-701fb7c4b13a req-1d03c8cf-8d24-410b-a624-d568a6ecaecf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Processing event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.171 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.298 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.299 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046942.298286, 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.299 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] VM Started (Lifecycle Event)
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.301 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.304 239549 INFO nova.virt.libvirt.driver [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Instance spawned successfully.
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.305 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.324 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.328 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.329 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.329 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.330 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.330 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.331 239549 DEBUG nova.virt.libvirt.driver [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.335 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.363 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.364 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046942.2990925, 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.365 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] VM Paused (Lifecycle Event)
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.391 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.397 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046942.3017936, 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.397 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] VM Resumed (Lifecycle Event)
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.404 239549 INFO nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Took 3.82 seconds to spawn the instance on the hypervisor.
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.405 239549 DEBUG nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.420 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.424 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.473 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.495 239549 INFO nova.compute.manager [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Took 6.44 seconds to build instance.
Feb 02 15:42:22 compute-0 podman[263295]: 2026-02-02 15:42:22.508915887 +0000 UTC m=+0.046889094 container create 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:42:22 compute-0 nova_compute[239545]: 2026-02-02 15:42:22.515 239549 DEBUG oslo_concurrency.lockutils [None req-4bcc3e2a-71ee-470a-b52a-0afd3ab04b8b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:22 compute-0 systemd[1]: Started libpod-conmon-04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc.scope.
Feb 02 15:42:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5a9b4c502ccf5b3497c0f5e33c8f59d5f94c1c6a0b682fbfa6471709f1173a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:22 compute-0 podman[263295]: 2026-02-02 15:42:22.48646657 +0000 UTC m=+0.024439787 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:42:22 compute-0 podman[263295]: 2026-02-02 15:42:22.593126608 +0000 UTC m=+0.131099845 container init 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:42:22 compute-0 podman[263295]: 2026-02-02 15:42:22.598602392 +0000 UTC m=+0.136575609 container start 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:42:22 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [NOTICE]   (263315) : New worker (263317) forked
Feb 02 15:42:22 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [NOTICE]   (263315) : Loading success.
Feb 02 15:42:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025343295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Feb 02 15:42:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Feb 02 15:42:22 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Feb 02 15:42:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2025343295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:23 compute-0 ceph-mon[75334]: osdmap e413: 3 total, 3 up, 3 in
Feb 02 15:42:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 33 KiB/s wr, 105 op/s
Feb 02 15:42:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Feb 02 15:42:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Feb 02 15:42:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Feb 02 15:42:24 compute-0 ceph-mon[75334]: pgmap v1474: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 33 KiB/s wr, 105 op/s
Feb 02 15:42:24 compute-0 ceph-mon[75334]: osdmap e414: 3 total, 3 up, 3 in
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.247 239549 DEBUG nova.compute.manager [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.247 239549 DEBUG oslo_concurrency.lockutils [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.247 239549 DEBUG oslo_concurrency.lockutils [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.248 239549 DEBUG oslo_concurrency.lockutils [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.248 239549 DEBUG nova.compute.manager [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] No waiting events found dispatching network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.248 239549 WARNING nova.compute.manager [req-3d992475-ccad-4550-8c33-d72ecc037a68 req-44ff202a-67fe-454d-9515-a93404bd1786 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received unexpected event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 for instance with vm_state active and task_state None.
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.288 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.350 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:24 compute-0 NetworkManager[49171]: <info>  [1770046944.3516] manager: (patch-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Feb 02 15:42:24 compute-0 NetworkManager[49171]: <info>  [1770046944.3528] manager: (patch-br-int-to-provnet-d1981747-82d9-4ed4-8c37-fe8d420812f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.424 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:24 compute-0 ovn_controller[144995]: 2026-02-02T15:42:24Z|00169|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.442 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.619 239549 DEBUG nova.compute.manager [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-changed-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.619 239549 DEBUG nova.compute.manager [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Refreshing instance network info cache due to event network-changed-f325e981-4c0c-4aa0-814b-8e0d58e800d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.620 239549 DEBUG oslo_concurrency.lockutils [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.620 239549 DEBUG oslo_concurrency.lockutils [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:24 compute-0 nova_compute[239545]: 2026-02-02 15:42:24.620 239549 DEBUG nova.network.neutron [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Refreshing network info cache for port f325e981-4c0c-4aa0-814b-8e0d58e800d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:42:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Feb 02 15:42:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Feb 02 15:42:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Feb 02 15:42:25 compute-0 sudo[263328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:42:25 compute-0 sudo[263328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:25 compute-0 sudo[263328]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:25 compute-0 sudo[263353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:42:25 compute-0 sudo[263353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:25 compute-0 sudo[263353]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:42:25 compute-0 nova_compute[239545]: 2026-02-02 15:42:25.688 239549 DEBUG nova.network.neutron [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updated VIF entry in instance network info cache for port f325e981-4c0c-4aa0-814b-8e0d58e800d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:42:25 compute-0 nova_compute[239545]: 2026-02-02 15:42:25.689 239549 DEBUG nova.network.neutron [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating instance_info_cache with network_info: [{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 83 KiB/s wr, 225 op/s
Feb 02 15:42:25 compute-0 nova_compute[239545]: 2026-02-02 15:42:25.713 239549 DEBUG oslo_concurrency.lockutils [req-d1203801-cd63-4f9e-9b64-6a1306ce3f11 req-7f94b543-b034-4bd1-95bd-4335d4a087e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:25 compute-0 sudo[263407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:42:25 compute-0 sudo[263407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:25 compute-0 sudo[263407]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:25 compute-0 sudo[263432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:42:25 compute-0 sudo[263432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:25 compute-0 ceph-mon[75334]: osdmap e415: 3 total, 3 up, 3 in
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:42:25 compute-0 ceph-mon[75334]: pgmap v1477: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 83 KiB/s wr, 225 op/s
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.003016213 +0000 UTC m=+0.037440352 container create faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:42:26 compute-0 systemd[1]: Started libpod-conmon-faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f.scope.
Feb 02 15:42:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.072882155 +0000 UTC m=+0.107306324 container init faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.079956107 +0000 UTC m=+0.114380236 container start faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:25.986522362 +0000 UTC m=+0.020946521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.08376769 +0000 UTC m=+0.118191849 container attach faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:42:26 compute-0 ecstatic_shannon[263483]: 167 167
Feb 02 15:42:26 compute-0 systemd[1]: libpod-faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f.scope: Deactivated successfully.
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.08619307 +0000 UTC m=+0.120617209 container died faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-564f20bba620c88787e765e5b36f540bd7abf3ec48bf4384ddefc72494ece4a5-merged.mount: Deactivated successfully.
Feb 02 15:42:26 compute-0 nova_compute[239545]: 2026-02-02 15:42:26.108 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:26 compute-0 podman[263469]: 2026-02-02 15:42:26.125403125 +0000 UTC m=+0.159827264 container remove faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:42:26 compute-0 systemd[1]: libpod-conmon-faffaead71de7008f55039804190e753595cebbce81ed1b5e1c76c58cdbde90f.scope: Deactivated successfully.
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.258599759 +0000 UTC m=+0.037834923 container create be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:42:26 compute-0 systemd[1]: Started libpod-conmon-be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9.scope.
Feb 02 15:42:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.242401254 +0000 UTC m=+0.021636438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.355389526 +0000 UTC m=+0.134624720 container init be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.363410112 +0000 UTC m=+0.142645276 container start be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.366499987 +0000 UTC m=+0.145735161 container attach be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:42:26 compute-0 loving_shockley[263526]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:42:26 compute-0 loving_shockley[263526]: --> All data devices are unavailable
Feb 02 15:42:26 compute-0 systemd[1]: libpod-be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9.scope: Deactivated successfully.
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.756552738 +0000 UTC m=+0.535787912 container died be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bab824438c762d3d79f6505ff43526eaa3d1ef2371785999aa1aa1f639343cb-merged.mount: Deactivated successfully.
Feb 02 15:42:26 compute-0 podman[263509]: 2026-02-02 15:42:26.793224301 +0000 UTC m=+0.572459465 container remove be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shockley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:42:26 compute-0 systemd[1]: libpod-conmon-be16490bda0fd775654963a553ccffba799034423916c55caca57a49ca3e79a9.scope: Deactivated successfully.
Feb 02 15:42:26 compute-0 sudo[263432]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:26 compute-0 sudo[263558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:42:26 compute-0 sudo[263558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:26 compute-0 sudo[263558]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:26 compute-0 sudo[263583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:42:26 compute-0 sudo[263583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.144991679 +0000 UTC m=+0.038156860 container create 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:42:27 compute-0 systemd[1]: Started libpod-conmon-25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d.scope.
Feb 02 15:42:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.215508396 +0000 UTC m=+0.108673607 container init 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.126626701 +0000 UTC m=+0.019791902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.223353887 +0000 UTC m=+0.116519068 container start 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.227347415 +0000 UTC m=+0.120512626 container attach 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:42:27 compute-0 frosty_lichterman[263638]: 167 167
Feb 02 15:42:27 compute-0 systemd[1]: libpod-25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d.scope: Deactivated successfully.
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.230143603 +0000 UTC m=+0.123308784 container died 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d570e627b7eb9041a6acd481b36d4a1eca72ba63eb66ae1beb39cbdb190eee9c-merged.mount: Deactivated successfully.
Feb 02 15:42:27 compute-0 podman[263621]: 2026-02-02 15:42:27.271655624 +0000 UTC m=+0.164820805 container remove 25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:42:27 compute-0 systemd[1]: libpod-conmon-25223edbb5766ce0933da4f25c55997d5feb8889a63f309d4dc243128c4d2f7d.scope: Deactivated successfully.
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.411413358 +0000 UTC m=+0.045514429 container create b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:42:27 compute-0 systemd[1]: Started libpod-conmon-b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03.scope.
Feb 02 15:42:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433842d8a8806ffb9ddded83d9450175e6aadd6a3b45b06f0e2b3ac11b38ac60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433842d8a8806ffb9ddded83d9450175e6aadd6a3b45b06f0e2b3ac11b38ac60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433842d8a8806ffb9ddded83d9450175e6aadd6a3b45b06f0e2b3ac11b38ac60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433842d8a8806ffb9ddded83d9450175e6aadd6a3b45b06f0e2b3ac11b38ac60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.39503788 +0000 UTC m=+0.029138981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.48538353 +0000 UTC m=+0.119484601 container init b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.491001516 +0000 UTC m=+0.125102587 container start b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.496744826 +0000 UTC m=+0.130845897 container attach b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:42:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Feb 02 15:42:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Feb 02 15:42:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Feb 02 15:42:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 96 KiB/s wr, 257 op/s
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]: {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     "0": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "devices": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "/dev/loop3"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             ],
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_name": "ceph_lv0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_size": "21470642176",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "name": "ceph_lv0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "tags": {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_name": "ceph",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.crush_device_class": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.encrypted": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.objectstore": "bluestore",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_id": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.vdo": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.with_tpm": "0"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             },
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "vg_name": "ceph_vg0"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         }
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     ],
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     "1": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "devices": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "/dev/loop4"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             ],
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_name": "ceph_lv1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_size": "21470642176",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "name": "ceph_lv1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "tags": {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_name": "ceph",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.crush_device_class": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.encrypted": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.objectstore": "bluestore",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_id": "1",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.vdo": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.with_tpm": "0"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             },
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "vg_name": "ceph_vg1"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         }
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     ],
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     "2": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "devices": [
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "/dev/loop5"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             ],
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_name": "ceph_lv2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_size": "21470642176",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "name": "ceph_lv2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "tags": {
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.cluster_name": "ceph",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.crush_device_class": "",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.encrypted": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.objectstore": "bluestore",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osd_id": "2",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.vdo": "0",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:                 "ceph.with_tpm": "0"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             },
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "type": "block",
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:             "vg_name": "ceph_vg2"
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:         }
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]:     ]
Feb 02 15:42:27 compute-0 vibrant_varahamihira[263677]: }
Feb 02 15:42:27 compute-0 systemd[1]: libpod-b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03.scope: Deactivated successfully.
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.817844928 +0000 UTC m=+0.451945999 container died b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-433842d8a8806ffb9ddded83d9450175e6aadd6a3b45b06f0e2b3ac11b38ac60-merged.mount: Deactivated successfully.
Feb 02 15:42:27 compute-0 podman[263660]: 2026-02-02 15:42:27.851571149 +0000 UTC m=+0.485672220 container remove b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:42:27 compute-0 systemd[1]: libpod-conmon-b179fbe09a3afed5158dc978602a5d6e22e8ac6a1e1c93133e12dfa856999b03.scope: Deactivated successfully.
Feb 02 15:42:27 compute-0 sudo[263583]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:27 compute-0 sudo[263699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:42:27 compute-0 sudo[263699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:27 compute-0 sudo[263699]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Feb 02 15:42:27 compute-0 sudo[263724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:42:27 compute-0 sudo[263724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Feb 02 15:42:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Feb 02 15:42:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/510133963' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/510133963' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638252986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638252986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.276186582 +0000 UTC m=+0.040684482 container create f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:42:28 compute-0 systemd[1]: Started libpod-conmon-f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94.scope.
Feb 02 15:42:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.347423397 +0000 UTC m=+0.111921307 container init f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.352153432 +0000 UTC m=+0.116651322 container start f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.259575927 +0000 UTC m=+0.024073857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:28 compute-0 hardcore_buck[263778]: 167 167
Feb 02 15:42:28 compute-0 systemd[1]: libpod-f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94.scope: Deactivated successfully.
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.358266541 +0000 UTC m=+0.122764431 container attach f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.358737432 +0000 UTC m=+0.123235322 container died f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-81e75a922eb1a5ab38b8b0acdd0697ad2d567a441abb1f97045eb86f5cd00ceb-merged.mount: Deactivated successfully.
Feb 02 15:42:28 compute-0 podman[263761]: 2026-02-02 15:42:28.399541686 +0000 UTC m=+0.164039576 container remove f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:42:28 compute-0 systemd[1]: libpod-conmon-f51443e1d2fee91d2760a939d40956573edd1dc37a469e3c896708ae4b2c0a94.scope: Deactivated successfully.
Feb 02 15:42:28 compute-0 podman[263802]: 2026-02-02 15:42:28.563485069 +0000 UTC m=+0.057973963 container create f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:42:28 compute-0 systemd[1]: Started libpod-conmon-f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b.scope.
Feb 02 15:42:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ec53f635ed903ed06ddfe4679c2df34b51d0f45e4ba493adf98c28eb63d37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ec53f635ed903ed06ddfe4679c2df34b51d0f45e4ba493adf98c28eb63d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ec53f635ed903ed06ddfe4679c2df34b51d0f45e4ba493adf98c28eb63d37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ec53f635ed903ed06ddfe4679c2df34b51d0f45e4ba493adf98c28eb63d37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:28 compute-0 podman[263802]: 2026-02-02 15:42:28.630965033 +0000 UTC m=+0.125453947 container init f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:42:28 compute-0 podman[263802]: 2026-02-02 15:42:28.541133605 +0000 UTC m=+0.035622579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:42:28 compute-0 podman[263802]: 2026-02-02 15:42:28.638605409 +0000 UTC m=+0.133094293 container start f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:42:28 compute-0 podman[263802]: 2026-02-02 15:42:28.642217587 +0000 UTC m=+0.136706501 container attach f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:42:28 compute-0 ceph-mon[75334]: osdmap e416: 3 total, 3 up, 3 in
Feb 02 15:42:28 compute-0 ceph-mon[75334]: pgmap v1479: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 96 KiB/s wr, 257 op/s
Feb 02 15:42:28 compute-0 ceph-mon[75334]: osdmap e417: 3 total, 3 up, 3 in
Feb 02 15:42:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/510133963' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/510133963' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3638252986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3638252986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:29 compute-0 lvm[263895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:42:29 compute-0 lvm[263895]: VG ceph_vg0 finished
Feb 02 15:42:29 compute-0 lvm[263898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:42:29 compute-0 lvm[263898]: VG ceph_vg1 finished
Feb 02 15:42:29 compute-0 lvm[263900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:42:29 compute-0 lvm[263900]: VG ceph_vg2 finished
Feb 02 15:42:29 compute-0 nova_compute[239545]: 2026-02-02 15:42:29.292 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:29 compute-0 gifted_banzai[263818]: {}
Feb 02 15:42:29 compute-0 systemd[1]: libpod-f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b.scope: Deactivated successfully.
Feb 02 15:42:29 compute-0 podman[263802]: 2026-02-02 15:42:29.379877804 +0000 UTC m=+0.874366698 container died f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:42:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a71ec53f635ed903ed06ddfe4679c2df34b51d0f45e4ba493adf98c28eb63d37-merged.mount: Deactivated successfully.
Feb 02 15:42:29 compute-0 podman[263802]: 2026-02-02 15:42:29.413273338 +0000 UTC m=+0.907762232 container remove f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:42:29 compute-0 systemd[1]: libpod-conmon-f5029fcf61bd7014065bf604fbe9462b539e000ff5cfa3b6129c35cdd061e30b.scope: Deactivated successfully.
Feb 02 15:42:29 compute-0 sudo[263724]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:42:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:42:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:29 compute-0 sudo[263914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:42:29 compute-0 sudo[263914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:42:29 compute-0 sudo[263914]: pam_unix(sudo:session): session closed for user root
Feb 02 15:42:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 49 KiB/s wr, 209 op/s
Feb 02 15:42:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:42:30 compute-0 ceph-mon[75334]: pgmap v1481: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 49 KiB/s wr, 209 op/s
Feb 02 15:42:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1415096069' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:31 compute-0 nova_compute[239545]: 2026-02-02 15:42:31.111 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Feb 02 15:42:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Feb 02 15:42:31 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Feb 02 15:42:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1415096069' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 KiB/s wr, 86 op/s
Feb 02 15:42:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Feb 02 15:42:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Feb 02 15:42:32 compute-0 ceph-mon[75334]: osdmap e418: 3 total, 3 up, 3 in
Feb 02 15:42:32 compute-0 ceph-mon[75334]: pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 KiB/s wr, 86 op/s
Feb 02 15:42:32 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Feb 02 15:42:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Feb 02 15:42:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Feb 02 15:42:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Feb 02 15:42:33 compute-0 ceph-mon[75334]: osdmap e419: 3 total, 3 up, 3 in
Feb 02 15:42:33 compute-0 ceph-mon[75334]: osdmap e420: 3 total, 3 up, 3 in
Feb 02 15:42:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 KiB/s wr, 121 op/s
Feb 02 15:42:34 compute-0 nova_compute[239545]: 2026-02-02 15:42:34.291 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Feb 02 15:42:34 compute-0 ceph-mon[75334]: pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 KiB/s wr, 121 op/s
Feb 02 15:42:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Feb 02 15:42:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Feb 02 15:42:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177445434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177445434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:35 compute-0 ceph-mon[75334]: osdmap e421: 3 total, 3 up, 3 in
Feb 02 15:42:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2177445434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2177445434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 159 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 503 KiB/s rd, 4.0 MiB/s wr, 162 op/s
Feb 02 15:42:36 compute-0 ovn_controller[144995]: 2026-02-02T15:42:36Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:1d:eb 10.100.0.10
Feb 02 15:42:36 compute-0 ovn_controller[144995]: 2026-02-02T15:42:36Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:1d:eb 10.100.0.10
Feb 02 15:42:36 compute-0 nova_compute[239545]: 2026-02-02 15:42:36.115 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:36 compute-0 ceph-mon[75334]: pgmap v1488: 305 pgs: 305 active+clean; 159 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 503 KiB/s rd, 4.0 MiB/s wr, 162 op/s
Feb 02 15:42:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 159 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.8 MiB/s wr, 113 op/s
Feb 02 15:42:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3394458656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:37 compute-0 ceph-mon[75334]: pgmap v1489: 305 pgs: 305 active+clean; 159 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.8 MiB/s wr, 113 op/s
Feb 02 15:42:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3394458656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Feb 02 15:42:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Feb 02 15:42:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Feb 02 15:42:39 compute-0 nova_compute[239545]: 2026-02-02 15:42:39.293 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 205 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 9.4 MiB/s wr, 220 op/s
Feb 02 15:42:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Feb 02 15:42:39 compute-0 ceph-mon[75334]: osdmap e422: 3 total, 3 up, 3 in
Feb 02 15:42:39 compute-0 ceph-mon[75334]: pgmap v1491: 305 pgs: 305 active+clean; 205 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 634 KiB/s rd, 9.4 MiB/s wr, 220 op/s
Feb 02 15:42:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Feb 02 15:42:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Feb 02 15:42:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/152436570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/152436570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Feb 02 15:42:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Feb 02 15:42:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Feb 02 15:42:40 compute-0 ceph-mon[75334]: osdmap e423: 3 total, 3 up, 3 in
Feb 02 15:42:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/152436570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/152436570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:41 compute-0 nova_compute[239545]: 2026-02-02 15:42:41.121 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:41 compute-0 podman[263941]: 2026-02-02 15:42:41.339601794 +0000 UTC m=+0.070254634 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:42:41 compute-0 podman[263940]: 2026-02-02 15:42:41.358734048 +0000 UTC m=+0.091447269 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 02 15:42:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 281 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 20 MiB/s wr, 206 op/s
Feb 02 15:42:41 compute-0 ceph-mon[75334]: osdmap e424: 3 total, 3 up, 3 in
Feb 02 15:42:41 compute-0 ceph-mon[75334]: pgmap v1494: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 281 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 20 MiB/s wr, 206 op/s
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.287 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.288 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.313 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.392 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.393 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.403 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.404 239549 INFO nova.compute.claims [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:42:42 compute-0 nova_compute[239545]: 2026-02-02 15:42:42.522 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:42:42
Feb 02 15:42:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:42:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:42:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'vms', 'default.rgw.meta']
Feb 02 15:42:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:42:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Feb 02 15:42:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Feb 02 15:42:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Feb 02 15:42:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:42:43 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2170500660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Feb 02 15:42:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Feb 02 15:42:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.118 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.123 239549 DEBUG nova.compute.provider_tree [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.137 239549 DEBUG nova.scheduler.client.report [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.161 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.161 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.214 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.215 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.232 239549 INFO nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.250 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.287 239549 INFO nova.virt.block_device [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Booting with volume 0d27bdc2-c098-4067-bfd3-bb3f0f4711d2 at /dev/vda
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.360 239549 DEBUG nova.policy [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df03e4d41ae644fca567cfe648b7bad6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.391 239549 DEBUG os_brick.utils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.392 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.405 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.405 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ea58b4e7-b31d-45d9-a0df-0fa57da8af2c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.407 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.415 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.415 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a2587194-a512-44fb-8b44-da7287c2bdbc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.416 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.423 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.423 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[72fdac8a-0217-4fa6-aaa0-3a957561f78a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.424 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[60d544e2-e78e-4fdf-8935-76bded25c686]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.424 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.444 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.446 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.447 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.447 239549 DEBUG os_brick.initiator.connectors.lightos [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.447 239549 DEBUG os_brick.utils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] <== get_connector_properties: return (54ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.447 239549 DEBUG nova.virt.block_device [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updating existing volume attachment record: 5f4c38ac-5ef5-455c-ae67-d78e4e5711d6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:42:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 19 MiB/s wr, 203 op/s
Feb 02 15:42:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:43.887 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.887 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:43 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:43.888 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:42:43 compute-0 ceph-mon[75334]: osdmap e425: 3 total, 3 up, 3 in
Feb 02 15:42:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2170500660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:42:43 compute-0 ceph-mon[75334]: osdmap e426: 3 total, 3 up, 3 in
Feb 02 15:42:43 compute-0 ceph-mon[75334]: pgmap v1497: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 19 MiB/s wr, 203 op/s
Feb 02 15:42:43 compute-0 nova_compute[239545]: 2026-02-02 15:42:43.990 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Successfully created port: 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:42:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/293552941' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.295 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.476 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.478 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.478 239549 INFO nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Creating image(s)
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.478 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.479 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Ensure instance console log exists: /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.479 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.479 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.480 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.728 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Successfully updated port: 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.743 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.745 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquired lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.745 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.807 239549 DEBUG nova.compute.manager [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-changed-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.807 239549 DEBUG nova.compute.manager [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Refreshing instance network info cache due to event network-changed-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.808 239549 DEBUG oslo_concurrency.lockutils [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:44 compute-0 nova_compute[239545]: 2026-02-02 15:42:44.881 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:42:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:42:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Feb 02 15:42:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Feb 02 15:42:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Feb 02 15:42:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/293552941' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.489 239549 DEBUG nova.network.neutron [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updating instance_info_cache with network_info: [{"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.505 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Releasing lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.505 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Instance network_info: |[{"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.505 239549 DEBUG oslo_concurrency.lockutils [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.506 239549 DEBUG nova.network.neutron [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Refreshing network info cache for port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.508 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Start _get_guest_xml network_info=[{"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '5f4c38ac-5ef5-455c-ae67-d78e4e5711d6', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '587dcef5-85a2-49c6-8c3f-2cb01dd68aeb', 'attached_at': '', 'detached_at': '', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'serial': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.512 239549 WARNING nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.517 239549 DEBUG nova.virt.libvirt.host [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.517 239549 DEBUG nova.virt.libvirt.host [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.521 239549 DEBUG nova.virt.libvirt.host [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.522 239549 DEBUG nova.virt.libvirt.host [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.522 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.522 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.523 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.523 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.523 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.524 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.524 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.524 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.524 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.525 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.525 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.525 239549 DEBUG nova.virt.hardware [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.545 239549 DEBUG nova.storage.rbd_utils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:45 compute-0 nova_compute[239545]: 2026-02-02 15:42:45.549 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 34 KiB/s wr, 152 op/s
Feb 02 15:42:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:42:46 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4151592224' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.119 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:46 compute-0 ceph-mon[75334]: osdmap e427: 3 total, 3 up, 3 in
Feb 02 15:42:46 compute-0 ceph-mon[75334]: pgmap v1499: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 34 KiB/s wr, 152 op/s
Feb 02 15:42:46 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4151592224' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.125 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.253 239549 DEBUG os_brick.encryptors [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Using volume encryption metadata '{'encryption_key_id': '986e0cc6-d5e6-4b49-82b3-ac5262c222cd', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '587dcef5-85a2-49c6-8c3f-2cb01dd68aeb', 'attached_at': '', 'detached_at': '', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.255 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.268 239549 DEBUG barbicanclient.v1.secrets [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.268 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.288 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.289 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.309 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.310 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.332 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.332 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.357 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.358 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.383 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.384 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.402 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.403 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.420 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.420 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.440 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.440 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.461 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.462 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.780 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.781 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.791 239549 DEBUG nova.network.neutron [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updated VIF entry in instance network info cache for port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.792 239549 DEBUG nova.network.neutron [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updating instance_info_cache with network_info: [{"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.807 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.807 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.810 239549 DEBUG oslo_concurrency.lockutils [req-72283d4d-4f64-4902-a313-08f4627c9969 req-9c399117-3b05-4426-9f24-4b622fbbd5dd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.829 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.829 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.853 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.854 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.878 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.878 239549 INFO barbicanclient.base [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/986e0cc6-d5e6-4b49-82b3-ac5262c222cd
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.902 239549 DEBUG barbicanclient.client [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.903 239549 DEBUG nova.virt.libvirt.host [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <volume>0d27bdc2-c098-4067-bfd3-bb3f0f4711d2</volume>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:42:46 compute-0 nova_compute[239545]: </secret>
Feb 02 15:42:46 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.939 239549 DEBUG nova.virt.libvirt.vif [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1939615838',display_name='tempest-TransferEncryptedVolumeTest-server-1939615838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1939615838',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-wyyy0ue2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:42:43Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=587dcef5-85a2-49c6-8c3f-2cb01dd68aeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.939 239549 DEBUG nova.network.os_vif_util [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.940 239549 DEBUG nova.network.os_vif_util [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.942 239549 DEBUG nova.objects.instance [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.960 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <uuid>587dcef5-85a2-49c6-8c3f-2cb01dd68aeb</uuid>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <name>instance-00000012</name>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1939615838</nova:name>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:42:45</nova:creationTime>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:user uuid="df03e4d41ae644fca567cfe648b7bad6">tempest-TransferEncryptedVolumeTest-1895614673-project-member</nova:user>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:project uuid="6d6011a66bdb41cea09b6018ceeec7d4">tempest-TransferEncryptedVolumeTest-1895614673</nova:project>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <nova:port uuid="9a01f6e3-a5ae-4664-9f09-3c75fa4331a2">
Feb 02 15:42:46 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <system>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="serial">587dcef5-85a2-49c6-8c3f-2cb01dd68aeb</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="uuid">587dcef5-85a2-49c6-8c3f-2cb01dd68aeb</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </system>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <os>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </os>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <features>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </features>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </source>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </source>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <serial>0d27bdc2-c098-4067-bfd3-bb3f0f4711d2</serial>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:42:46 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="b8e05571-4eb9-428d-b185-8e5231011c63"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:f7:96:ef"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <target dev="tap9a01f6e3-a5"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/console.log" append="off"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <video>
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </video>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:42:46 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:42:46 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:42:46 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:42:46 compute-0 nova_compute[239545]: </domain>
Feb 02 15:42:46 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.960 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Preparing to wait for external event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.961 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.961 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.961 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.962 239549 DEBUG nova.virt.libvirt.vif [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1939615838',display_name='tempest-TransferEncryptedVolumeTest-server-1939615838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1939615838',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-wyyy0ue2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:42:43Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=587dcef5-85a2-49c6-8c3f-2cb01dd68aeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.962 239549 DEBUG nova.network.os_vif_util [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.962 239549 DEBUG nova.network.os_vif_util [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.963 239549 DEBUG os_vif [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.963 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.964 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.964 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.966 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.966 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a01f6e3-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.967 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a01f6e3-a5, col_values=(('external_ids', {'iface-id': '9a01f6e3-a5ae-4664-9f09-3c75fa4331a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:96:ef', 'vm-uuid': '587dcef5-85a2-49c6-8c3f-2cb01dd68aeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.968 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:46 compute-0 NetworkManager[49171]: <info>  [1770046966.9697] manager: (tap9a01f6e3-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.970 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.974 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:46 compute-0 nova_compute[239545]: 2026-02-02 15:42:46.974 239549 INFO os_vif [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5')
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.019 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.020 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.020 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No VIF found with MAC fa:16:3e:f7:96:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.021 239549 INFO nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Using config drive
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.040 239549 DEBUG nova.storage.rbd_utils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Feb 02 15:42:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Feb 02 15:42:47 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Feb 02 15:42:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 146 op/s
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.922 239549 INFO nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Creating config drive at /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config
Feb 02 15:42:47 compute-0 nova_compute[239545]: 2026-02-02 15:42:47.925 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgum0si7f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.048 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgum0si7f" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.071 239549 DEBUG nova.storage.rbd_utils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.074 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:42:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Feb 02 15:42:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Feb 02 15:42:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Feb 02 15:42:48 compute-0 ceph-mon[75334]: osdmap e428: 3 total, 3 up, 3 in
Feb 02 15:42:48 compute-0 ceph-mon[75334]: pgmap v1501: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 146 op/s
Feb 02 15:42:48 compute-0 ceph-mon[75334]: osdmap e429: 3 total, 3 up, 3 in
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.205 239549 DEBUG oslo_concurrency.processutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.206 239549 INFO nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Deleting local config drive /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb/disk.config because it was imported into RBD.
Feb 02 15:42:48 compute-0 kernel: tap9a01f6e3-a5: entered promiscuous mode
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.2640] manager: (tap9a01f6e3-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.266 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_controller[144995]: 2026-02-02T15:42:48Z|00170|binding|INFO|Claiming lport 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 for this chassis.
Feb 02 15:42:48 compute-0 ovn_controller[144995]: 2026-02-02T15:42:48Z|00171|binding|INFO|9a01f6e3-a5ae-4664-9f09-3c75fa4331a2: Claiming fa:16:3e:f7:96:ef 10.100.0.7
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.274 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:ef 10.100.0.7'], port_security=['fa:16:3e:f7:96:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '587dcef5-85a2-49c6-8c3f-2cb01dd68aeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8d353b3-b1bd-4128-966b-cb49804d5ec9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:42:48 compute-0 ovn_controller[144995]: 2026-02-02T15:42:48Z|00172|binding|INFO|Setting lport 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 ovn-installed in OVS
Feb 02 15:42:48 compute-0 ovn_controller[144995]: 2026-02-02T15:42:48Z|00173|binding|INFO|Setting lport 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 up in Southbound
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.275 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c bound to our chassis
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.276 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.277 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.281 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.284 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[25271195-7ce4-4e68-b347-45f210051668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.285 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6f67b7a-31 in ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.288 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6f67b7a-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.288 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[6bee266b-ae32-441f-8121-3b566a807b97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.289 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d122b8a-0472-42b2-8fa9-747dc98a4c69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 systemd-machined[207609]: New machine qemu-18-instance-00000012.
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.300 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee0e1b4-f2ea-4bec-bb60-3287bfc3ea56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.311 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[40825ba2-85bf-4c34-ac89-3602a25b5dea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Feb 02 15:42:48 compute-0 systemd-udevd[264134]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.337 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[84b6044b-c184-4fdf-8f66-455f3ac26269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.341 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c361db2-cb1c-4309-81c7-98bcdc69bf0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.3426] manager: (tapb6f67b7a-30): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Feb 02 15:42:48 compute-0 systemd-udevd[264136]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.3491] device (tap9a01f6e3-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.3496] device (tap9a01f6e3-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.369 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[b02baad0-f4e0-4853-8719-c3f0dca48b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.373 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e20dc8ad-0417-4db3-bc8d-a14b5560b5b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.3939] device (tapb6f67b7a-30): carrier: link connected
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.396 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf04902-4a65-4b56-9d06-62c540cc36b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.408 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d9f176-e797-46f4-9a80-c9efe70f5210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445430, 'reachable_time': 17747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264162, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.424 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eb016f73-ba32-47e9-af47-1e708ce22e73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:b29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445430, 'tstamp': 445430}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264163, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:42:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2183728371' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:42:48 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2183728371' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.437 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea3f080-9f8e-47f2-88ce-2289931a5f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445430, 'reachable_time': 17747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264164, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.466 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e0aafa01-1e6d-4b18-a149-d202528f5256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.492 239549 DEBUG nova.compute.manager [req-09eee552-f2af-4cab-b3b6-da6cf1f41abd req-d3bcdc27-259d-4261-85d3-d3ae709b0acc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.492 239549 DEBUG oslo_concurrency.lockutils [req-09eee552-f2af-4cab-b3b6-da6cf1f41abd req-d3bcdc27-259d-4261-85d3-d3ae709b0acc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.493 239549 DEBUG oslo_concurrency.lockutils [req-09eee552-f2af-4cab-b3b6-da6cf1f41abd req-d3bcdc27-259d-4261-85d3-d3ae709b0acc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.493 239549 DEBUG oslo_concurrency.lockutils [req-09eee552-f2af-4cab-b3b6-da6cf1f41abd req-d3bcdc27-259d-4261-85d3-d3ae709b0acc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.493 239549 DEBUG nova.compute.manager [req-09eee552-f2af-4cab-b3b6-da6cf1f41abd req-d3bcdc27-259d-4261-85d3-d3ae709b0acc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Processing event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.525 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[11294796-0c4d-40bc-b6c7-22c2ec639afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.527 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.527 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.528 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6f67b7a-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.530 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 NetworkManager[49171]: <info>  [1770046968.5307] manager: (tapb6f67b7a-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Feb 02 15:42:48 compute-0 kernel: tapb6f67b7a-30: entered promiscuous mode
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.533 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.535 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6f67b7a-30, col_values=(('external_ids', {'iface-id': '4216aeff-7d93-404b-9880-8737d42e9d19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.536 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_controller[144995]: 2026-02-02T15:42:48Z|00174|binding|INFO|Releasing lport 4216aeff-7d93-404b-9880-8737d42e9d19 from this chassis (sb_readonly=0)
Feb 02 15:42:48 compute-0 nova_compute[239545]: 2026-02-02 15:42:48.542 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.544 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.545 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c176ae97-e4b3-40f4-a2be-6c988830dc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.545 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:42:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:48.546 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'env', 'PROCESS_TAG=haproxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:42:48 compute-0 podman[264203]: 2026-02-02 15:42:48.891315262 +0000 UTC m=+0.045795512 container create 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:42:48 compute-0 systemd[1]: Started libpod-conmon-6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff.scope.
Feb 02 15:42:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd7d15c9820ae7eb8ce20b0975176efcaffb44c3cccb8d4f8b13b73a36d21f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:42:48 compute-0 podman[264203]: 2026-02-02 15:42:48.9564265 +0000 UTC m=+0.110906780 container init 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:42:48 compute-0 podman[264203]: 2026-02-02 15:42:48.863538298 +0000 UTC m=+0.018018568 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:42:48 compute-0 podman[264203]: 2026-02-02 15:42:48.961801961 +0000 UTC m=+0.116282211 container start 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 15:42:48 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [NOTICE]   (264251) : New worker (264253) forked
Feb 02 15:42:48 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [NOTICE]   (264251) : Loading success.
Feb 02 15:42:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2183728371' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:42:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2183728371' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:42:49 compute-0 nova_compute[239545]: 2026-02-02 15:42:49.298 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 25 KiB/s wr, 65 op/s
Feb 02 15:42:50 compute-0 ceph-mon[75334]: pgmap v1503: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 25 KiB/s wr, 65 op/s
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.569 239549 DEBUG nova.compute.manager [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.569 239549 DEBUG oslo_concurrency.lockutils [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.569 239549 DEBUG oslo_concurrency.lockutils [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.570 239549 DEBUG oslo_concurrency.lockutils [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.570 239549 DEBUG nova.compute.manager [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] No waiting events found dispatching network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:42:50 compute-0 nova_compute[239545]: 2026-02-02 15:42:50.570 239549 WARNING nova.compute.manager [req-e70c6a12-6ebe-4dbb-af28-69235c2509b6 req-936f09bc-3ce9-44d5-8f48-7f1bd01b2049 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received unexpected event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 for instance with vm_state building and task_state spawning.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.380 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.381 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046971.379885, 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.381 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] VM Started (Lifecycle Event)
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.383 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.386 239549 INFO nova.virt.libvirt.driver [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Instance spawned successfully.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.386 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.407 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.411 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.412 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.412 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.412 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.413 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.413 239549 DEBUG nova.virt.libvirt.driver [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.417 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.451 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.452 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046971.3829951, 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.452 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] VM Paused (Lifecycle Event)
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.482 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.485 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046971.383315, 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.486 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] VM Resumed (Lifecycle Event)
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.495 239549 INFO nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Took 7.02 seconds to spawn the instance on the hypervisor.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.495 239549 DEBUG nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.506 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.509 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.544 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.573 239549 INFO nova.compute.manager [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Took 9.21 seconds to build instance.
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.588 239549 DEBUG oslo_concurrency.lockutils [None req-eb095f2d-3a4c-4e42-8654-e431e6029cdc df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 34 KiB/s wr, 88 op/s
Feb 02 15:42:51 compute-0 nova_compute[239545]: 2026-02-02 15:42:51.971 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:51 compute-0 ceph-mon[75334]: pgmap v1504: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 34 KiB/s wr, 88 op/s
Feb 02 15:42:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Feb 02 15:42:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Feb 02 15:42:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Feb 02 15:42:53 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:53.890 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:42:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 31 KiB/s wr, 126 op/s
Feb 02 15:42:54 compute-0 ceph-mon[75334]: osdmap e430: 3 total, 3 up, 3 in
Feb 02 15:42:54 compute-0 ceph-mon[75334]: pgmap v1506: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 31 KiB/s wr, 126 op/s
Feb 02 15:42:54 compute-0 nova_compute[239545]: 2026-02-02 15:42:54.300 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.279566872202963e-06 of space, bias 1.0, pg target 0.0018838700616608888 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00294549812785876 of space, bias 1.0, pg target 0.883649438357628 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.586244644112378e-06 of space, bias 1.0, pg target 0.0007758733932337134 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676020004044925 of space, bias 1.0, pg target 0.20028060012134777 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.501802619705056e-06 of space, bias 4.0, pg target 0.0018021631436460673 quantized to 16 (current 16)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:42:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:42:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 169 op/s
Feb 02 15:42:56 compute-0 nova_compute[239545]: 2026-02-02 15:42:56.975 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:56 compute-0 ceph-mon[75334]: pgmap v1507: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 169 op/s
Feb 02 15:42:57 compute-0 nova_compute[239545]: 2026-02-02 15:42:57.207 239549 DEBUG nova.compute.manager [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-changed-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:42:57 compute-0 nova_compute[239545]: 2026-02-02 15:42:57.207 239549 DEBUG nova.compute.manager [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Refreshing instance network info cache due to event network-changed-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:42:57 compute-0 nova_compute[239545]: 2026-02-02 15:42:57.208 239549 DEBUG oslo_concurrency.lockutils [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:42:57 compute-0 nova_compute[239545]: 2026-02-02 15:42:57.208 239549 DEBUG oslo_concurrency.lockutils [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:42:57 compute-0 nova_compute[239545]: 2026-02-02 15:42:57.208 239549 DEBUG nova.network.neutron [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Refreshing network info cache for port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:42:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 22 KiB/s wr, 138 op/s
Feb 02 15:42:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:42:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Feb 02 15:42:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Feb 02 15:42:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Feb 02 15:42:58 compute-0 nova_compute[239545]: 2026-02-02 15:42:58.611 239549 DEBUG nova.network.neutron [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updated VIF entry in instance network info cache for port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:42:58 compute-0 nova_compute[239545]: 2026-02-02 15:42:58.612 239549 DEBUG nova.network.neutron [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updating instance_info_cache with network_info: [{"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:42:58 compute-0 nova_compute[239545]: 2026-02-02 15:42:58.631 239549 DEBUG oslo_concurrency.lockutils [req-b15c8ec0-80dd-40ee-98a3-c5cf44c31b43 req-1cf2bb9f-9eab-4093-b64b-672f024c58ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:42:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Feb 02 15:42:59 compute-0 ceph-mon[75334]: pgmap v1508: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 22 KiB/s wr, 138 op/s
Feb 02 15:42:59 compute-0 ceph-mon[75334]: osdmap e431: 3 total, 3 up, 3 in
Feb 02 15:42:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Feb 02 15:42:59 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Feb 02 15:42:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:59.253 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:42:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:59.253 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:42:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:42:59.254 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:42:59 compute-0 nova_compute[239545]: 2026-02-02 15:42:59.341 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:42:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 KiB/s wr, 117 op/s
Feb 02 15:43:00 compute-0 ceph-mon[75334]: osdmap e432: 3 total, 3 up, 3 in
Feb 02 15:43:01 compute-0 ceph-mon[75334]: pgmap v1511: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 KiB/s wr, 117 op/s
Feb 02 15:43:01 compute-0 anacron[35674]: Job `cron.weekly' started
Feb 02 15:43:01 compute-0 anacron[35674]: Job `cron.weekly' terminated
Feb 02 15:43:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 74 op/s
Feb 02 15:43:01 compute-0 nova_compute[239545]: 2026-02-02 15:43:01.978 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:02 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 02 15:43:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:03 compute-0 ceph-mon[75334]: pgmap v1512: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 74 op/s
Feb 02 15:43:03 compute-0 nova_compute[239545]: 2026-02-02 15:43:03.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:03 compute-0 ovn_controller[144995]: 2026-02-02T15:43:03Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:96:ef 10.100.0.7
Feb 02 15:43:03 compute-0 ovn_controller[144995]: 2026-02-02T15:43:03Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:96:ef 10.100.0.7
Feb 02 15:43:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 281 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Feb 02 15:43:04 compute-0 nova_compute[239545]: 2026-02-02 15:43:04.344 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:05 compute-0 ceph-mon[75334]: pgmap v1513: 305 pgs: 305 active+clean; 281 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Feb 02 15:43:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 327 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 6.6 MiB/s wr, 116 op/s
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.475 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.475 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.498 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.610 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.611 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.620 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.620 239549 INFO nova.compute.claims [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.756 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.957 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.958 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.958 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.959 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:06 compute-0 nova_compute[239545]: 2026-02-02 15:43:06.982 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Feb 02 15:43:07 compute-0 ceph-mon[75334]: pgmap v1514: 305 pgs: 305 active+clean; 327 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 813 KiB/s rd, 6.6 MiB/s wr, 116 op/s
Feb 02 15:43:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Feb 02 15:43:07 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Feb 02 15:43:07 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:07 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355306721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.297 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.303 239549 DEBUG nova.compute.provider_tree [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.328 239549 DEBUG nova.scheduler.client.report [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.349 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.350 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.410 239549 INFO nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.412 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.413 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.435 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.482 239549 INFO nova.virt.block_device [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Booting with volume snapshot a1c6e9a5-82e5-4d45-835f-6d88434a11cb at /dev/vda
Feb 02 15:43:07 compute-0 nova_compute[239545]: 2026-02-02 15:43:07.577 239549 DEBUG nova.policy [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:43:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 327 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 739 KiB/s rd, 6.0 MiB/s wr, 105 op/s
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.022 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating instance_info_cache with network_info: [{"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.041 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-4d22e226-bdcc-49f4-b9b5-85c81397a0f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.042 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:43:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:08 compute-0 ceph-mon[75334]: osdmap e433: 3 total, 3 up, 3 in
Feb 02 15:43:08 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3355306721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.243 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Successfully created port: 733bac79-4a7c-42c0-90c2-1a1e68e1543f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:08 compute-0 nova_compute[239545]: 2026-02-02 15:43:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Feb 02 15:43:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Feb 02 15:43:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Feb 02 15:43:09 compute-0 ceph-mon[75334]: pgmap v1516: 305 pgs: 305 active+clean; 327 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 739 KiB/s rd, 6.0 MiB/s wr, 105 op/s
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.350 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Successfully updated port: 733bac79-4a7c-42c0-90c2-1a1e68e1543f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.353 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.375 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.376 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.376 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.455 239549 DEBUG nova.compute.manager [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-changed-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.456 239549 DEBUG nova.compute.manager [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Refreshing instance network info cache due to event network-changed-733bac79-4a7c-42c0-90c2-1a1e68e1543f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.456 239549 DEBUG oslo_concurrency.lockutils [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:09 compute-0 nova_compute[239545]: 2026-02-02 15:43:09.579 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:43:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2640494500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2640494500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 8.7 MiB/s wr, 129 op/s
Feb 02 15:43:10 compute-0 ceph-mon[75334]: osdmap e434: 3 total, 3 up, 3 in
Feb 02 15:43:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2640494500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:10 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2640494500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.548 239549 DEBUG nova.network.neutron [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updating instance_info_cache with network_info: [{"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.574 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.574 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.574 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.587 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.588 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Instance network_info: |[{"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.589 239549 DEBUG oslo_concurrency.lockutils [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:10 compute-0 nova_compute[239545]: 2026-02-02 15:43:10.589 239549 DEBUG nova.network.neutron [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Refreshing network info cache for port 733bac79-4a7c-42c0-90c2-1a1e68e1543f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:43:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422393927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.141 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:11 compute-0 ceph-mon[75334]: pgmap v1518: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 8.7 MiB/s wr, 129 op/s
Feb 02 15:43:11 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/422393927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.229 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.230 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.234 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.234 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.436 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.437 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=59.98790469113737GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.437 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.438 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.566 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.566 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.567 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.567 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.567 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.635 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.654 239549 DEBUG nova.network.neutron [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updated VIF entry in instance network info cache for port 733bac79-4a7c-42c0-90c2-1a1e68e1543f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.655 239549 DEBUG nova.network.neutron [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updating instance_info_cache with network_info: [{"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.702 239549 DEBUG os_brick.utils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.706 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.720 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.720 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[4bec66cf-17a3-4214-8b93-c7c612a466cf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.722 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.731 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.731 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3daed1-2537-4857-8d8d-64579f35469f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.733 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.741 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.742 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe78ad5-56f5-4fb4-bacd-9937a2fa3eb1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.743 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcb59f1-9b16-4985-a7e2-7b791c26edd7]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.744 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.766 239549 DEBUG oslo_concurrency.lockutils [req-183eb5da-118c-439d-8518-dfefb34aa7af req-deeef354-e39f-47b0-bc36-2d69a9e414db d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.768 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.769 239549 DEBUG os_brick.initiator.connectors.lightos [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.769 239549 DEBUG os_brick.initiator.connectors.lightos [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.770 239549 DEBUG os_brick.initiator.connectors.lightos [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.770 239549 DEBUG os_brick.utils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.770 239549 DEBUG nova.virt.block_device [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updating existing volume attachment record: 3b1aa97a-589f-46ee-96c7-d83a5288e1ec _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:43:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 7.4 MiB/s wr, 155 op/s
Feb 02 15:43:11 compute-0 nova_compute[239545]: 2026-02-02 15:43:11.985 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491482937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.266 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.271 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.288 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.317 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.317 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:12 compute-0 podman[264348]: 2026-02-02 15:43:12.319569042 +0000 UTC m=+0.051384796 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 02 15:43:12 compute-0 podman[264345]: 2026-02-02 15:43:12.378590984 +0000 UTC m=+0.110472370 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 02 15:43:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:43:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2830976639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.756 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.756 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.757 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.757 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.757 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.759 239549 INFO nova.compute.manager [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Terminating instance
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.760 239549 DEBUG nova.compute.manager [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:43:12 compute-0 kernel: tap9a01f6e3-a5 (unregistering): left promiscuous mode
Feb 02 15:43:12 compute-0 NetworkManager[49171]: <info>  [1770046992.8041] device (tap9a01f6e3-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:43:12 compute-0 ovn_controller[144995]: 2026-02-02T15:43:12Z|00175|binding|INFO|Releasing lport 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 from this chassis (sb_readonly=0)
Feb 02 15:43:12 compute-0 ovn_controller[144995]: 2026-02-02T15:43:12Z|00176|binding|INFO|Setting lport 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 down in Southbound
Feb 02 15:43:12 compute-0 ovn_controller[144995]: 2026-02-02T15:43:12Z|00177|binding|INFO|Removing iface tap9a01f6e3-a5 ovn-installed in OVS
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.814 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:12.819 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:96:ef 10.100.0.7'], port_security=['fa:16:3e:f7:96:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '587dcef5-85a2-49c6-8c3f-2cb01dd68aeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8d353b3-b1bd-4128-966b-cb49804d5ec9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:12.820 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:43:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:12.821 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:43:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:12.824 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[dcecc30e-d59f-4e59-87cd-b8f738cdd474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:12.825 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace which is not needed anymore
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.825 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:12 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb 02 15:43:12 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 15.095s CPU time.
Feb 02 15:43:12 compute-0 systemd-machined[207609]: Machine qemu-18-instance-00000012 terminated.
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.872 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.873 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.874 239549 INFO nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Creating image(s)
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.874 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.874 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Ensure instance console log exists: /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.875 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.875 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.875 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.877 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Start _get_guest_xml network_info=[{"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T15:42:58Z,direct_url=<?>,disk_format='qcow2',id=d4b335c4-e07b-4ee0-9761-2796fef45b8d,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1688846528',owner='8a28227cdc0a4390bebe7549f189bfe5',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T15:42:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '3b1aa97a-589f-46ee-96c7-d83a5288e1ec', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f4e480c6-ad80-4ac0-bde3-8ca6f7670b08', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f4e480c6-ad80-4ac0-bde3-8ca6f7670b08', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '84365bea-19f8-4121-86d5-dd9e1a5eeaa3', 'attached_at': '', 'detached_at': '', 'volume_id': 'f4e480c6-ad80-4ac0-bde3-8ca6f7670b08', 'serial': 'f4e480c6-ad80-4ac0-bde3-8ca6f7670b08'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.882 239549 WARNING nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.886 239549 DEBUG nova.virt.libvirt.host [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.887 239549 DEBUG nova.virt.libvirt.host [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.889 239549 DEBUG nova.virt.libvirt.host [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.889 239549 DEBUG nova.virt.libvirt.host [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.890 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.890 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T15:42:58Z,direct_url=<?>,disk_format='qcow2',id=d4b335c4-e07b-4ee0-9761-2796fef45b8d,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1688846528',owner='8a28227cdc0a4390bebe7549f189bfe5',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T15:42:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.891 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.892 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.892 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.892 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.892 239549 DEBUG nova.virt.hardware [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.915 239549 DEBUG nova.storage.rbd_utils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:12 compute-0 nova_compute[239545]: 2026-02-02 15:43:12.918 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:12 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [NOTICE]   (264251) : haproxy version is 2.8.14-c23fe91
Feb 02 15:43:12 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [NOTICE]   (264251) : path to executable is /usr/sbin/haproxy
Feb 02 15:43:12 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [WARNING]  (264251) : Exiting Master process...
Feb 02 15:43:12 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [ALERT]    (264251) : Current worker (264253) exited with code 143 (Terminated)
Feb 02 15:43:12 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[264247]: [WARNING]  (264251) : All workers exited. Exiting... (0)
Feb 02 15:43:12 compute-0 systemd[1]: libpod-6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff.scope: Deactivated successfully.
Feb 02 15:43:12 compute-0 podman[264412]: 2026-02-02 15:43:12.963774823 +0000 UTC m=+0.047758319 container died 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff-userdata-shm.mount: Deactivated successfully.
Feb 02 15:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bd7d15c9820ae7eb8ce20b0975176efcaffb44c3cccb8d4f8b13b73a36d21f3-merged.mount: Deactivated successfully.
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.001 239549 INFO nova.virt.libvirt.driver [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Instance destroyed successfully.
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.002 239549 DEBUG nova.objects.instance [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'resources' on Instance uuid 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:13 compute-0 podman[264412]: 2026-02-02 15:43:13.013857188 +0000 UTC m=+0.097840664 container cleanup 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:43:13 compute-0 systemd[1]: libpod-conmon-6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff.scope: Deactivated successfully.
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.032 239549 DEBUG nova.virt.libvirt.vif [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1939615838',display_name='tempest-TransferEncryptedVolumeTest-server-1939615838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1939615838',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:42:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-wyyy0ue2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:42:51Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=587dcef5-85a2-49c6-8c3f-2cb01dd68aeb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.033 239549 DEBUG nova.network.os_vif_util [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "address": "fa:16:3e:f7:96:ef", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a01f6e3-a5", "ovs_interfaceid": "9a01f6e3-a5ae-4664-9f09-3c75fa4331a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.034 239549 DEBUG nova.network.os_vif_util [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.035 239549 DEBUG os_vif [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.038 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.038 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a01f6e3-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.040 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.043 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.047 239549 INFO os_vif [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:96:ef,bridge_name='br-int',has_traffic_filtering=True,id=9a01f6e3-a5ae-4664-9f09-3c75fa4331a2,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a01f6e3-a5')
Feb 02 15:43:13 compute-0 podman[264473]: 2026-02-02 15:43:13.081658482 +0000 UTC m=+0.046342664 container remove 6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.090 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[770fd87b-b842-40a2-ac2d-620540eed1c0]: (4, ('Mon Feb  2 03:43:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff)\n6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff\nMon Feb  2 03:43:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff)\n6aa10dd2368852dd5ab5a9030987b2b2ac3ae90b1dfa0c5ae7fc73d431391bff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.092 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[db0ce4ce-d70b-411f-b599-6d845dc82fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.093 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.095 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 kernel: tapb6f67b7a-30: left promiscuous mode
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.101 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.105 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd9159c-91d7-4eb7-866a-b67ca2402c2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.122 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbf2d6b-a18b-4f30-8f14-8a0a6a57662b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.123 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[abb65389-fdb6-4554-86ee-c22fd8566931]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.136 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d63563ba-f85f-42b2-904a-9cafd42bd09b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445424, 'reachable_time': 37711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264523, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 systemd[1]: run-netns-ovnmeta\x2db6f67b7a\x2d3fd7\x2d4623\x2d9937\x2d142eb5dabe2c.mount: Deactivated successfully.
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.141 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:43:13 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:13.141 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[22e80d28-a8dc-4fc6-833d-96917dd19ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.209 239549 INFO nova.virt.libvirt.driver [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Deleting instance files /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_del
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.209 239549 INFO nova.virt.libvirt.driver [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Deletion of /var/lib/nova/instances/587dcef5-85a2-49c6-8c3f-2cb01dd68aeb_del complete
Feb 02 15:43:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Feb 02 15:43:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Feb 02 15:43:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Feb 02 15:43:13 compute-0 ceph-mon[75334]: pgmap v1519: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 7.4 MiB/s wr, 155 op/s
Feb 02 15:43:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/491482937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2830976639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.276 239549 INFO nova.compute.manager [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Took 0.52 seconds to destroy the instance on the hypervisor.
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.276 239549 DEBUG oslo.service.loopingcall [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.277 239549 DEBUG nova.compute.manager [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.277 239549 DEBUG nova.network.neutron [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:43:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:43:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055176977' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.494 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.526 239549 DEBUG nova.virt.libvirt.vif [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-945783403',id=19,image_ref='d4b335c4-e07b-4ee0-9761-2796fef45b8d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmrRjEkb6cgWeSYjc37lQrrxVPle3ppFSK+u78pO20zuPWTQn5idX7F6RNg8VypNDKVczqmhPVrv5jrSZc9gmN0i7G3lT8N9zdxz/YDHLLnJS8oiVtr1W60g1y/bS6ftw==',key_name='tempest-keypair-846465039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ub0gw0t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-77302308',image_owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:43:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=84365bea-19f8-4121-86d5-dd9e1a5eeaa3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.526 239549 DEBUG nova.network.os_vif_util [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.527 239549 DEBUG nova.network.os_vif_util [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.528 239549 DEBUG nova.objects.instance [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.532 239549 DEBUG nova.compute.manager [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-unplugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.532 239549 DEBUG oslo_concurrency.lockutils [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.532 239549 DEBUG oslo_concurrency.lockutils [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.533 239549 DEBUG oslo_concurrency.lockutils [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.533 239549 DEBUG nova.compute.manager [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] No waiting events found dispatching network-vif-unplugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.533 239549 DEBUG nova.compute.manager [req-057907ed-fa68-45d1-95a8-c1eb84a67cb2 req-85e34e0d-0b47-4592-861e-c1d8619fd8e8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-unplugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.543 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <uuid>84365bea-19f8-4121-86d5-dd9e1a5eeaa3</uuid>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <name>instance-00000013</name>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-945783403</nova:name>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:43:12</nova:creationTime>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="d4b335c4-e07b-4ee0-9761-2796fef45b8d"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <nova:port uuid="733bac79-4a7c-42c0-90c2-1a1e68e1543f">
Feb 02 15:43:13 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <system>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="serial">84365bea-19f8-4121-86d5-dd9e1a5eeaa3</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="uuid">84365bea-19f8-4121-86d5-dd9e1a5eeaa3</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </system>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <os>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </os>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <features>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </features>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config">
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </source>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-f4e480c6-ad80-4ac0-bde3-8ca6f7670b08">
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </source>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:43:13 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <serial>f4e480c6-ad80-4ac0-bde3-8ca6f7670b08</serial>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:02:b5:8f"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <target dev="tap733bac79-4a"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/console.log" append="off"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <video>
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </video>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <input type="keyboard" bus="usb"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:43:13 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:43:13 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:43:13 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:43:13 compute-0 nova_compute[239545]: </domain>
Feb 02 15:43:13 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.544 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Preparing to wait for external event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.544 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.544 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.544 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.545 239549 DEBUG nova.virt.libvirt.vif [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-945783403',id=19,image_ref='d4b335c4-e07b-4ee0-9761-2796fef45b8d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmrRjEkb6cgWeSYjc37lQrrxVPle3ppFSK+u78pO20zuPWTQn5idX7F6RNg8VypNDKVczqmhPVrv5jrSZc9gmN0i7G3lT8N9zdxz/YDHLLnJS8oiVtr1W60g1y/bS6ftw==',key_name='tempest-keypair-846465039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ub0gw0t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-77302308',image_owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:43:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=84365bea-19f8-4121-86d5-dd9e1a5eeaa3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building
') vif={"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.545 239549 DEBUG nova.network.os_vif_util [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.546 239549 DEBUG nova.network.os_vif_util [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.546 239549 DEBUG os_vif [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.547 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.547 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.547 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.552 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.552 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap733bac79-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.553 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap733bac79-4a, col_values=(('external_ids', {'iface-id': '733bac79-4a7c-42c0-90c2-1a1e68e1543f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b5:8f', 'vm-uuid': '84365bea-19f8-4121-86d5-dd9e1a5eeaa3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.554 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 NetworkManager[49171]: <info>  [1770046993.5551] manager: (tap733bac79-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.556 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.558 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.559 239549 INFO os_vif [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a')
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.607 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.607 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.607 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:02:b5:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.608 239549 INFO nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Using config drive
Feb 02 15:43:13 compute-0 nova_compute[239545]: 2026-02-02 15:43:13.626 239549 DEBUG nova.storage.rbd_utils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Feb 02 15:43:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Feb 02 15:43:14 compute-0 ceph-mon[75334]: osdmap e435: 3 total, 3 up, 3 in
Feb 02 15:43:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4055176977' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Feb 02 15:43:14 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.318 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.319 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.319 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.354 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.393 239549 INFO nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Creating config drive at /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.397 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbk2e1cey execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.415 239549 DEBUG nova.network.neutron [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.434 239549 INFO nova.compute.manager [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Took 1.16 seconds to deallocate network for instance.
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.520 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbk2e1cey" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.547 239549 DEBUG nova.storage.rbd_utils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.551 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config 84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.616 239549 INFO nova.compute.manager [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Took 0.18 seconds to detach 1 volumes for instance.
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.657 239549 DEBUG oslo_concurrency.processutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config 84365bea-19f8-4121-86d5-dd9e1a5eeaa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.658 239549 INFO nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Deleting local config drive /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3/disk.config because it was imported into RBD.
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.666 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.667 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:14 compute-0 kernel: tap733bac79-4a: entered promiscuous mode
Feb 02 15:43:14 compute-0 systemd-udevd[264391]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:43:14 compute-0 ovn_controller[144995]: 2026-02-02T15:43:14Z|00178|binding|INFO|Claiming lport 733bac79-4a7c-42c0-90c2-1a1e68e1543f for this chassis.
Feb 02 15:43:14 compute-0 ovn_controller[144995]: 2026-02-02T15:43:14Z|00179|binding|INFO|733bac79-4a7c-42c0-90c2-1a1e68e1543f: Claiming fa:16:3e:02:b5:8f 10.100.0.14
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.692 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:14 compute-0 NetworkManager[49171]: <info>  [1770046994.6928] manager: (tap733bac79-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Feb 02 15:43:14 compute-0 NetworkManager[49171]: <info>  [1770046994.6996] device (tap733bac79-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.699 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b5:8f 10.100.0.14'], port_security=['fa:16:3e:02:b5:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '84365bea-19f8-4121-86d5-dd9e1a5eeaa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '004cec53-e5f7-46da-97cb-05df85e39e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=733bac79-4a7c-42c0-90c2-1a1e68e1543f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:14 compute-0 NetworkManager[49171]: <info>  [1770046994.7002] device (tap733bac79-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:43:14 compute-0 ovn_controller[144995]: 2026-02-02T15:43:14Z|00180|binding|INFO|Setting lport 733bac79-4a7c-42c0-90c2-1a1e68e1543f ovn-installed in OVS
Feb 02 15:43:14 compute-0 ovn_controller[144995]: 2026-02-02T15:43:14Z|00181|binding|INFO|Setting lport 733bac79-4a7c-42c0-90c2-1a1e68e1543f up in Southbound
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.700 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 733bac79-4a7c-42c0-90c2-1a1e68e1543f in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.701 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.702 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.711 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[15da57e8-2ac0-4b06-a129-18030a0a347c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 systemd-machined[207609]: New machine qemu-19-instance-00000013.
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:14 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.730 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[bc61b604-499c-47f3-be7b-14a9e53f6de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.733 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[f20ee3ca-5f99-4c57-bfe1-439659b1f405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.752 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[256da469-4953-4239-a5ee-8758bbc12dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.763 239549 DEBUG oslo_concurrency.processutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.765 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2b7c9b-7419-4f54-acee-c49f09dc8b67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442796, 'reachable_time': 17277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264611, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.778 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c535204-7aff-4368-abb7-ec01ea3dcb9f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442804, 'tstamp': 442804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264613, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442806, 'tstamp': 442806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264613, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.779 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:14 compute-0 nova_compute[239545]: 2026-02-02 15:43:14.782 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.782 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.783 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.783 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:14 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:14.783 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.000 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046995.000029, 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.001 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] VM Started (Lifecycle Event)
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.047 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.051 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046995.000302, 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.051 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] VM Paused (Lifecycle Event)
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.087 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.090 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.112 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.124 239549 DEBUG nova.compute.manager [req-e30f4581-3f45-42da-b42d-9382b4659d7d req-1a875797-356c-43d9-b0b0-51684c802ea9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.125 239549 DEBUG oslo_concurrency.lockutils [req-e30f4581-3f45-42da-b42d-9382b4659d7d req-1a875797-356c-43d9-b0b0-51684c802ea9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.125 239549 DEBUG oslo_concurrency.lockutils [req-e30f4581-3f45-42da-b42d-9382b4659d7d req-1a875797-356c-43d9-b0b0-51684c802ea9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.126 239549 DEBUG oslo_concurrency.lockutils [req-e30f4581-3f45-42da-b42d-9382b4659d7d req-1a875797-356c-43d9-b0b0-51684c802ea9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.126 239549 DEBUG nova.compute.manager [req-e30f4581-3f45-42da-b42d-9382b4659d7d req-1a875797-356c-43d9-b0b0-51684c802ea9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Processing event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.127 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.147 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770046995.1375413, 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.149 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] VM Resumed (Lifecycle Event)
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.151 239549 DEBUG nova.virt.libvirt.driver [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.156 239549 INFO nova.virt.libvirt.driver [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Instance spawned successfully.
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.157 239549 INFO nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Took 2.28 seconds to spawn the instance on the hypervisor.
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.157 239549 DEBUG nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.172 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.177 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.217 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.240 239549 INFO nova.compute.manager [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Took 8.66 seconds to build instance.
Feb 02 15:43:15 compute-0 ceph-mon[75334]: pgmap v1521: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Feb 02 15:43:15 compute-0 ceph-mon[75334]: osdmap e436: 3 total, 3 up, 3 in
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.258 239549 DEBUG oslo_concurrency.lockutils [None req-04654d5a-9881-450e-bd1f-0741039e8298 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832796214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.331 239549 DEBUG oslo_concurrency.processutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.335 239549 DEBUG nova.compute.provider_tree [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.351 239549 DEBUG nova.scheduler.client.report [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.380 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.400 239549 INFO nova.scheduler.client.report [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Deleted allocations for instance 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb
Feb 02 15:43:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/395297631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/395297631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.464 239549 DEBUG oslo_concurrency.lockutils [None req-727d1037-b8e4-4c80-be23-3ebcdd084ea2 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.601 239549 DEBUG nova.compute.manager [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.601 239549 DEBUG oslo_concurrency.lockutils [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.602 239549 DEBUG oslo_concurrency.lockutils [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.602 239549 DEBUG oslo_concurrency.lockutils [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "587dcef5-85a2-49c6-8c3f-2cb01dd68aeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.602 239549 DEBUG nova.compute.manager [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] No waiting events found dispatching network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.602 239549 WARNING nova.compute.manager [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received unexpected event network-vif-plugged-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 for instance with vm_state deleted and task_state None.
Feb 02 15:43:15 compute-0 nova_compute[239545]: 2026-02-02 15:43:15.602 239549 DEBUG nova.compute.manager [req-f182aa29-73e0-4590-ad27-6fda4ff28478 req-5c055b2a-3f06-47ec-b170-d23b8314ef20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Received event network-vif-deleted-9a01f6e3-a5ae-4664-9f09-3c75fa4331a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 42 KiB/s wr, 158 op/s
Feb 02 15:43:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/832796214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/395297631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/395297631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.207 239549 DEBUG nova.compute.manager [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.207 239549 DEBUG oslo_concurrency.lockutils [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.208 239549 DEBUG oslo_concurrency.lockutils [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.208 239549 DEBUG oslo_concurrency.lockutils [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.208 239549 DEBUG nova.compute.manager [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] No waiting events found dispatching network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.208 239549 WARNING nova.compute.manager [req-0e0fbdf5-0341-4883-ab1f-0ca4429c9b33 req-bd9eb74a-dc66-4fb6-ba84-ce6e02bd42b0 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received unexpected event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f for instance with vm_state active and task_state None.
Feb 02 15:43:17 compute-0 ceph-mon[75334]: pgmap v1523: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 42 KiB/s wr, 158 op/s
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.696 239549 DEBUG nova.compute.manager [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-changed-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.696 239549 DEBUG nova.compute.manager [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Refreshing instance network info cache due to event network-changed-733bac79-4a7c-42c0-90c2-1a1e68e1543f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.696 239549 DEBUG oslo_concurrency.lockutils [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.696 239549 DEBUG oslo_concurrency.lockutils [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:17 compute-0 nova_compute[239545]: 2026-02-02 15:43:17.697 239549 DEBUG nova.network.neutron [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Refreshing network info cache for port 733bac79-4a7c-42c0-90c2-1a1e68e1543f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:43:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 36 KiB/s wr, 133 op/s
Feb 02 15:43:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Feb 02 15:43:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Feb 02 15:43:18 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Feb 02 15:43:18 compute-0 nova_compute[239545]: 2026-02-02 15:43:18.608 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:19 compute-0 ceph-mon[75334]: pgmap v1524: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 36 KiB/s wr, 133 op/s
Feb 02 15:43:19 compute-0 ceph-mon[75334]: osdmap e437: 3 total, 3 up, 3 in
Feb 02 15:43:19 compute-0 nova_compute[239545]: 2026-02-02 15:43:19.226 239549 DEBUG nova.network.neutron [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updated VIF entry in instance network info cache for port 733bac79-4a7c-42c0-90c2-1a1e68e1543f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:43:19 compute-0 nova_compute[239545]: 2026-02-02 15:43:19.226 239549 DEBUG nova.network.neutron [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updating instance_info_cache with network_info: [{"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:19 compute-0 nova_compute[239545]: 2026-02-02 15:43:19.250 239549 DEBUG oslo_concurrency.lockutils [req-fe3fdac3-5d33-420e-95f8-5ab6ce7ae902 req-980fbfa4-b255-442d-8de8-d60b9ca7ccc9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-84365bea-19f8-4121-86d5-dd9e1a5eeaa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:19 compute-0 nova_compute[239545]: 2026-02-02 15:43:19.357 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 KiB/s wr, 135 op/s
Feb 02 15:43:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Feb 02 15:43:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Feb 02 15:43:20 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Feb 02 15:43:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098226141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098226141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:21 compute-0 ceph-mon[75334]: pgmap v1526: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 KiB/s wr, 135 op/s
Feb 02 15:43:21 compute-0 ceph-mon[75334]: osdmap e438: 3 total, 3 up, 3 in
Feb 02 15:43:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3098226141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3098226141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 46 KiB/s wr, 215 op/s
Feb 02 15:43:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Feb 02 15:43:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Feb 02 15:43:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Feb 02 15:43:23 compute-0 ceph-mon[75334]: pgmap v1528: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 46 KiB/s wr, 215 op/s
Feb 02 15:43:23 compute-0 ceph-mon[75334]: osdmap e439: 3 total, 3 up, 3 in
Feb 02 15:43:23 compute-0 nova_compute[239545]: 2026-02-02 15:43:23.611 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 188 op/s
Feb 02 15:43:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Feb 02 15:43:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Feb 02 15:43:24 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Feb 02 15:43:24 compute-0 nova_compute[239545]: 2026-02-02 15:43:24.359 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Feb 02 15:43:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Feb 02 15:43:25 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Feb 02 15:43:25 compute-0 ceph-mon[75334]: pgmap v1530: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 188 op/s
Feb 02 15:43:25 compute-0 ceph-mon[75334]: osdmap e440: 3 total, 3 up, 3 in
Feb 02 15:43:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.6 KiB/s wr, 152 op/s
Feb 02 15:43:26 compute-0 ceph-mon[75334]: osdmap e441: 3 total, 3 up, 3 in
Feb 02 15:43:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564642347' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564642347' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.473 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.474 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.508 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.589 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.589 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.597 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.597 239549 INFO nova.compute.claims [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:43:26 compute-0 nova_compute[239545]: 2026-02-02 15:43:26.726 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:27 compute-0 ceph-mon[75334]: pgmap v1533: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.6 KiB/s wr, 152 op/s
Feb 02 15:43:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3564642347' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3564642347' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938184960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.257 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.265 239549 DEBUG nova.compute.provider_tree [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.281 239549 DEBUG nova.scheduler.client.report [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.312 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.313 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.384 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.384 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.402 239549 INFO nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.418 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.465 239549 INFO nova.virt.block_device [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Booting with volume 0d27bdc2-c098-4067-bfd3-bb3f0f4711d2 at /dev/vda
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.565 239549 DEBUG nova.policy [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df03e4d41ae644fca567cfe648b7bad6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.590 239549 DEBUG os_brick.utils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.591 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.603 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.603 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[d20c6956-6f2d-4071-acf5-36b0900a2424]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.605 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.614 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.614 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b1dde4-32c3-4186-8a80-864b48a2f641]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.615 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.623 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.624 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[883b4d69-c87e-44c0-9dbd-b74aee687695]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.625 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[c96d1915-78c9-4179-8dc2-eef0e58e3dbe]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.625 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.644 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.646 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.646 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.646 239549 DEBUG os_brick.initiator.connectors.lightos [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.647 239549 DEBUG os_brick.utils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.647 239549 DEBUG nova.virt.block_device [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updating existing volume attachment record: 0be914d1-f80e-4ce6-b77e-e031ee49f93a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:43:27 compute-0 ovn_controller[144995]: 2026-02-02T15:43:27Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.14
Feb 02 15:43:27 compute-0 ovn_controller[144995]: 2026-02-02T15:43:27Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:02:b5:8f 10.100.0.14
Feb 02 15:43:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 59 op/s
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.995 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770046992.9939165, 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:27 compute-0 nova_compute[239545]: 2026-02-02 15:43:27.995 239549 INFO nova.compute.manager [-] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] VM Stopped (Lifecycle Event)
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.018 239549 DEBUG nova.compute.manager [None req-08c80b60-8879-44be-b8ba-bf0f79c65782 - - - - - -] [instance: 587dcef5-85a2-49c6-8c3f-2cb01dd68aeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Feb 02 15:43:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/486123051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/486123051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1938184960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:28 compute-0 ceph-mon[75334]: osdmap e442: 3 total, 3 up, 3 in
Feb 02 15:43:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/486123051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/486123051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.290 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Successfully created port: fca7a8cb-6a93-4ab4-b48f-742237e61009 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:43:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:43:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1537837625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.615 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.772 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.774 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.774 239549 INFO nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Creating image(s)
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.775 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.775 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Ensure instance console log exists: /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.775 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.776 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:28 compute-0 nova_compute[239545]: 2026-02-02 15:43:28.776 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.081 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Successfully updated port: fca7a8cb-6a93-4ab4-b48f-742237e61009 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.097 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.098 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquired lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.098 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.169 239549 DEBUG nova.compute.manager [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-changed-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.169 239549 DEBUG nova.compute.manager [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Refreshing instance network info cache due to event network-changed-fca7a8cb-6a93-4ab4-b48f-742237e61009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.169 239549 DEBUG oslo_concurrency.lockutils [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Feb 02 15:43:29 compute-0 ceph-mon[75334]: pgmap v1534: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 59 op/s
Feb 02 15:43:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1537837625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Feb 02 15:43:29 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.241 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:43:29 compute-0 nova_compute[239545]: 2026-02-02 15:43:29.360 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:29 compute-0 sudo[264707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:43:29 compute-0 sudo[264707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:29 compute-0 sudo[264707]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:29 compute-0 sudo[264732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:43:29 compute-0 sudo[264732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 354 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 543 KiB/s wr, 121 op/s
Feb 02 15:43:30 compute-0 sudo[264732]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:43:30 compute-0 sudo[264788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:43:30 compute-0 sudo[264788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:30 compute-0 sudo[264788]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Feb 02 15:43:30 compute-0 ceph-mon[75334]: osdmap e443: 3 total, 3 up, 3 in
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:43:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Feb 02 15:43:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Feb 02 15:43:30 compute-0 sudo[264813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:43:30 compute-0 sudo[264813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.430 239549 DEBUG nova.network.neutron [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updating instance_info_cache with network_info: [{"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.447 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Releasing lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.447 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Instance network_info: |[{"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.448 239549 DEBUG oslo_concurrency.lockutils [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.448 239549 DEBUG nova.network.neutron [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Refreshing network info cache for port fca7a8cb-6a93-4ab4-b48f-742237e61009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.451 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Start _get_guest_xml network_info=[{"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '0be914d1-f80e-4ce6-b77e-e031ee49f93a', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b9beea2c-422e-4f83-9a08-6275c559a931', 'attached_at': '', 'detached_at': '', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'serial': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.456 239549 WARNING nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.460 239549 DEBUG nova.virt.libvirt.host [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.461 239549 DEBUG nova.virt.libvirt.host [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.463 239549 DEBUG nova.virt.libvirt.host [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.464 239549 DEBUG nova.virt.libvirt.host [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.464 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.464 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.465 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.465 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.465 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.465 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.465 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.466 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.466 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.466 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.466 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.467 239549 DEBUG nova.virt.hardware [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.491 239549 DEBUG nova.storage.rbd_utils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image b9beea2c-422e-4f83-9a08-6275c559a931_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:30 compute-0 nova_compute[239545]: 2026-02-02 15:43:30.495 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.513264142 +0000 UTC m=+0.061626326 container create 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:43:30 compute-0 systemd[1]: Started libpod-conmon-0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec.scope.
Feb 02 15:43:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.567198269 +0000 UTC m=+0.115560473 container init 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.579986049 +0000 UTC m=+0.128348233 container start 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:43:30 compute-0 magical_dhawan[264886]: 167 167
Feb 02 15:43:30 compute-0 systemd[1]: libpod-0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec.scope: Deactivated successfully.
Feb 02 15:43:30 compute-0 conmon[264886]: conmon 0e46872b51f7f921bd84 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec.scope/container/memory.events
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.583325501 +0000 UTC m=+0.131687715 container attach 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.583988276 +0000 UTC m=+0.132350450 container died 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.489780172 +0000 UTC m=+0.038142406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c38339cf60a05eccab72e79f91c543318cea65f3f1a0ac9b5f94d74e534dfbd-merged.mount: Deactivated successfully.
Feb 02 15:43:30 compute-0 podman[264851]: 2026-02-02 15:43:30.619509197 +0000 UTC m=+0.167871381 container remove 0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:43:30 compute-0 systemd[1]: libpod-conmon-0e46872b51f7f921bd84527ee5617d003857fc4640cb72a529fee391ec6015ec.scope: Deactivated successfully.
Feb 02 15:43:30 compute-0 ovn_controller[144995]: 2026-02-02T15:43:30Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.14
Feb 02 15:43:30 compute-0 ovn_controller[144995]: 2026-02-02T15:43:30Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:02:b5:8f 10.100.0.14
Feb 02 15:43:30 compute-0 podman[264928]: 2026-02-02 15:43:30.77755819 +0000 UTC m=+0.048208790 container create 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:43:30 compute-0 systemd[1]: Started libpod-conmon-59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7.scope.
Feb 02 15:43:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:30 compute-0 podman[264928]: 2026-02-02 15:43:30.760089447 +0000 UTC m=+0.030740047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:30 compute-0 podman[264928]: 2026-02-02 15:43:30.876586201 +0000 UTC m=+0.147236801 container init 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:43:30 compute-0 podman[264928]: 2026-02-02 15:43:30.883319695 +0000 UTC m=+0.153970265 container start 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:43:30 compute-0 podman[264928]: 2026-02-02 15:43:30.893582423 +0000 UTC m=+0.164233023 container attach 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:43:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:43:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627738615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.028 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.142 239549 DEBUG os_brick.encryptors [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Using volume encryption metadata '{'encryption_key_id': 'ce3b3d4a-bfd4-4849-8cac-6db690d791ab', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b9beea2c-422e-4f83-9a08-6275c559a931', 'attached_at': '', 'detached_at': '', 'volume_id': '0d27bdc2-c098-4067-bfd3-bb3f0f4711d2', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.145 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.161 239549 DEBUG barbicanclient.v1.secrets [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.162 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 ceph-mon[75334]: pgmap v1537: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 354 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 543 KiB/s wr, 121 op/s
Feb 02 15:43:31 compute-0 ceph-mon[75334]: osdmap e444: 3 total, 3 up, 3 in
Feb 02 15:43:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/627738615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:43:31 compute-0 boring_poitras[264944]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:43:31 compute-0 boring_poitras[264944]: --> All data devices are unavailable
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.305 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.306 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 systemd[1]: libpod-59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7.scope: Deactivated successfully.
Feb 02 15:43:31 compute-0 podman[264928]: 2026-02-02 15:43:31.325964598 +0000 UTC m=+0.596615208 container died 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5791685e8581474643161742db499fe85477a7451666a2f8bd2edcbe96ac018c-merged.mount: Deactivated successfully.
Feb 02 15:43:31 compute-0 podman[264928]: 2026-02-02 15:43:31.361800737 +0000 UTC m=+0.632451317 container remove 59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 15:43:31 compute-0 systemd[1]: libpod-conmon-59bc085901e0ce4b585a63ea9bd1c675f8c4a77ecd6cbac806d0fa95a55779e7.scope: Deactivated successfully.
Feb 02 15:43:31 compute-0 sudo[264813]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.423 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.424 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.447 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.448 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.469 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.470 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 sudo[264978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:43:31 compute-0 sudo[264978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:31 compute-0 sudo[264978]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.497 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.497 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.525 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.526 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 sudo[265003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:43:31 compute-0 sudo[265003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.567 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.568 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.591 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.592 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.619 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.620 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.644 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.644 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.673 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.674 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.694 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.695 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.724 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.725 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.749 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.749 239549 INFO barbicanclient.base [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/ce3b3d4a-bfd4-4849-8cac-6db690d791ab
Feb 02 15:43:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715022783' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715022783' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.780 239549 DEBUG barbicanclient.client [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.781 239549 DEBUG nova.virt.libvirt.host [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <volume>0d27bdc2-c098-4067-bfd3-bb3f0f4711d2</volume>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:43:31 compute-0 nova_compute[239545]: </secret>
Feb 02 15:43:31 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.809 239549 DEBUG nova.virt.libvirt.vif [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:43:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1451984176',display_name='tempest-TransferEncryptedVolumeTest-server-1451984176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1451984176',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-m8v9yvae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:43:27Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=b9beea2c-422e-4f83-9a08-6275c559a931,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.809 239549 DEBUG nova.network.os_vif_util [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.810 239549 DEBUG nova.network.os_vif_util [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.811465641 +0000 UTC m=+0.037584973 container create bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.812 239549 DEBUG nova.objects.instance [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9beea2c-422e-4f83-9a08-6275c559a931 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.823 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <uuid>b9beea2c-422e-4f83-9a08-6275c559a931</uuid>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <name>instance-00000014</name>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1451984176</nova:name>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:43:30</nova:creationTime>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:user uuid="df03e4d41ae644fca567cfe648b7bad6">tempest-TransferEncryptedVolumeTest-1895614673-project-member</nova:user>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:project uuid="6d6011a66bdb41cea09b6018ceeec7d4">tempest-TransferEncryptedVolumeTest-1895614673</nova:project>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <nova:port uuid="fca7a8cb-6a93-4ab4-b48f-742237e61009">
Feb 02 15:43:31 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <system>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="serial">b9beea2c-422e-4f83-9a08-6275c559a931</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="uuid">b9beea2c-422e-4f83-9a08-6275c559a931</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </system>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <os>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </os>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <features>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </features>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/b9beea2c-422e-4f83-9a08-6275c559a931_disk.config">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </source>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-0d27bdc2-c098-4067-bfd3-bb3f0f4711d2">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </source>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <serial>0d27bdc2-c098-4067-bfd3-bb3f0f4711d2</serial>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:43:31 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="092af1ed-4996-44be-bb57-deea4223459f"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:06:42:6b"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <target dev="tapfca7a8cb-6a"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/console.log" append="off"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <video>
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </video>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:43:31 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:43:31 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:43:31 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:43:31 compute-0 nova_compute[239545]: </domain>
Feb 02 15:43:31 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.823 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Preparing to wait for external event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.824 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.824 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.824 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.825 239549 DEBUG nova.virt.libvirt.vif [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:43:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1451984176',display_name='tempest-TransferEncryptedVolumeTest-server-1451984176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1451984176',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-m8v9yvae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:43:27Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=b9beea2c-422e-4f83-9a08-6275c559a931,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.825 239549 DEBUG nova.network.os_vif_util [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.825 239549 DEBUG nova.network.os_vif_util [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.826 239549 DEBUG os_vif [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.826 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.827 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.827 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.830 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.830 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfca7a8cb-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.831 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfca7a8cb-6a, col_values=(('external_ids', {'iface-id': 'fca7a8cb-6a93-4ab4-b48f-742237e61009', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:42:6b', 'vm-uuid': 'b9beea2c-422e-4f83-9a08-6275c559a931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.832 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:31 compute-0 NetworkManager[49171]: <info>  [1770047011.8334] manager: (tapfca7a8cb-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.834 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:43:31 compute-0 systemd[1]: Started libpod-conmon-bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9.scope.
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.839 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.840 239549 INFO os_vif [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a')
Feb 02 15:43:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.877694407 +0000 UTC m=+0.103813759 container init bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.885770973 +0000 UTC m=+0.111890305 container start bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.793195428 +0000 UTC m=+0.019314780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.889554515 +0000 UTC m=+0.115673867 container attach bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:43:31 compute-0 frosty_kilby[265057]: 167 167
Feb 02 15:43:31 compute-0 systemd[1]: libpod-bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9.scope: Deactivated successfully.
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.892168177 +0000 UTC m=+0.118287519 container died bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.906 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.907 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.907 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No VIF found with MAC fa:16:3e:06:42:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.907 239549 INFO nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Using config drive
Feb 02 15:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-87af24bb04f4fc67ac7ee0f4875876ae549b0e128fa564ca2055bd120ff3e469-merged.mount: Deactivated successfully.
Feb 02 15:43:31 compute-0 podman[265039]: 2026-02-02 15:43:31.930113668 +0000 UTC m=+0.156233000 container remove bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:43:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 364 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 997 KiB/s wr, 165 op/s
Feb 02 15:43:31 compute-0 nova_compute[239545]: 2026-02-02 15:43:31.938 239549 DEBUG nova.storage.rbd_utils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image b9beea2c-422e-4f83-9a08-6275c559a931_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:31 compute-0 systemd[1]: libpod-conmon-bc477a14398cce5f15f4eb66e188948107837cc3e6ab971c1bbe5e4d617588a9.scope: Deactivated successfully.
Feb 02 15:43:32 compute-0 podman[265100]: 2026-02-02 15:43:32.075431421 +0000 UTC m=+0.036357922 container create e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:43:32 compute-0 systemd[1]: Started libpod-conmon-e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b.scope.
Feb 02 15:43:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8b61fa5a0e90f31aa2a3aee51e66bc35b5e3415a3f1cda6452f0883db92cfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8b61fa5a0e90f31aa2a3aee51e66bc35b5e3415a3f1cda6452f0883db92cfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8b61fa5a0e90f31aa2a3aee51e66bc35b5e3415a3f1cda6452f0883db92cfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b8b61fa5a0e90f31aa2a3aee51e66bc35b5e3415a3f1cda6452f0883db92cfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:32 compute-0 podman[265100]: 2026-02-02 15:43:32.059462064 +0000 UTC m=+0.020388575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:32 compute-0 podman[265100]: 2026-02-02 15:43:32.168939009 +0000 UTC m=+0.129865530 container init e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:43:32 compute-0 podman[265100]: 2026-02-02 15:43:32.180742586 +0000 UTC m=+0.141669067 container start e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:43:32 compute-0 podman[265100]: 2026-02-02 15:43:32.184011735 +0000 UTC m=+0.144938256 container attach e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:43:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/715022783' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/715022783' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.256 239549 INFO nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Creating config drive at /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.262 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1vr60d3i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.285 239549 DEBUG nova.network.neutron [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updated VIF entry in instance network info cache for port fca7a8cb-6a93-4ab4-b48f-742237e61009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.286 239549 DEBUG nova.network.neutron [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updating instance_info_cache with network_info: [{"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.305 239549 DEBUG oslo_concurrency.lockutils [req-8897544d-cec6-4e9b-a0c9-7679144100c3 req-9c3fee7c-1bcd-46b6-90f0-9f4134e54e06 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.390 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1vr60d3i" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.423 239549 DEBUG nova.storage.rbd_utils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image b9beea2c-422e-4f83-9a08-6275c559a931_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.428 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config b9beea2c-422e-4f83-9a08-6275c559a931_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]: {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     "0": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "devices": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "/dev/loop3"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             ],
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_name": "ceph_lv0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_size": "21470642176",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "name": "ceph_lv0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "tags": {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_name": "ceph",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.crush_device_class": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.encrypted": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.objectstore": "bluestore",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_id": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.vdo": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.with_tpm": "0"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             },
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "vg_name": "ceph_vg0"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         }
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     ],
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     "1": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "devices": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "/dev/loop4"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             ],
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_name": "ceph_lv1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_size": "21470642176",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "name": "ceph_lv1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "tags": {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_name": "ceph",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.crush_device_class": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.encrypted": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.objectstore": "bluestore",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_id": "1",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.vdo": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.with_tpm": "0"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             },
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "vg_name": "ceph_vg1"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         }
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     ],
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     "2": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "devices": [
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "/dev/loop5"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             ],
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_name": "ceph_lv2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_size": "21470642176",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "name": "ceph_lv2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "tags": {
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.cluster_name": "ceph",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.crush_device_class": "",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.encrypted": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.objectstore": "bluestore",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osd_id": "2",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.vdo": "0",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:                 "ceph.with_tpm": "0"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             },
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "type": "block",
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:             "vg_name": "ceph_vg2"
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:         }
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]:     ]
Feb 02 15:43:32 compute-0 vigilant_euclid[265116]: }
Feb 02 15:43:32 compute-0 systemd[1]: libpod-e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b.scope: Deactivated successfully.
Feb 02 15:43:32 compute-0 podman[265147]: 2026-02-02 15:43:32.509540188 +0000 UTC m=+0.031115186 container died e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8b61fa5a0e90f31aa2a3aee51e66bc35b5e3415a3f1cda6452f0883db92cfd-merged.mount: Deactivated successfully.
Feb 02 15:43:32 compute-0 podman[265147]: 2026-02-02 15:43:32.544617348 +0000 UTC m=+0.066192316 container remove e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:43:32 compute-0 systemd[1]: libpod-conmon-e1084534a92b16809d567569c0ca320cd516d73bf2caf0ed51a12dc013f4443b.scope: Deactivated successfully.
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.561 239549 DEBUG oslo_concurrency.processutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config b9beea2c-422e-4f83-9a08-6275c559a931_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.562 239549 INFO nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Deleting local config drive /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931/disk.config because it was imported into RBD.
Feb 02 15:43:32 compute-0 sudo[265003]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:32 compute-0 kernel: tapfca7a8cb-6a: entered promiscuous mode
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.6159] manager: (tapfca7a8cb-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.651 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00182|binding|INFO|Claiming lport fca7a8cb-6a93-4ab4-b48f-742237e61009 for this chassis.
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00183|binding|INFO|fca7a8cb-6a93-4ab4-b48f-742237e61009: Claiming fa:16:3e:06:42:6b 10.100.0.9
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00184|binding|INFO|Setting lport fca7a8cb-6a93-4ab4-b48f-742237e61009 ovn-installed in OVS
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00185|binding|INFO|Setting lport fca7a8cb-6a93-4ab4-b48f-742237e61009 up in Southbound
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.663 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:6b 10.100.0.9'], port_security=['fa:16:3e:06:42:6b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b9beea2c-422e-4f83-9a08-6275c559a931', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8d353b3-b1bd-4128-966b-cb49804d5ec9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=fca7a8cb-6a93-4ab4-b48f-742237e61009) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.663 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.665 154982 INFO neutron.agent.ovn.metadata.agent [-] Port fca7a8cb-6a93-4ab4-b48f-742237e61009 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c bound to our chassis
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.666 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:43:32 compute-0 systemd-machined[207609]: New machine qemu-20-instance-00000014.
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.678 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f5a6fa-5c2d-4f26-8ec3-434dd45efeaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.678 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6f67b7a-31 in ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:43:32 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.681 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6f67b7a-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.681 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7e673c-01a0-4363-b0b6-74302b49083b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.682 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[26dd850b-d777-4a6f-8313-55798e9f2f1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:b5:8f 10.100.0.14
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b5:8f 10.100.0.14
Feb 02 15:43:32 compute-0 systemd-udevd[265215]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:43:32 compute-0 sudo[265184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:43:32 compute-0 sudo[265184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:32 compute-0 sudo[265184]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.7011] device (tapfca7a8cb-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.7018] device (tapfca7a8cb-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.701 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[9eebdfe8-46f7-4e86-b1fe-afdc80f249e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.723 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[654d7213-a778-4351-a6a7-a670b8d677ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 sudo[265219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:43:32 compute-0 sudo[265219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.748 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[befae9fd-15c9-4164-839e-dbf003fbead5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.7536] manager: (tapb6f67b7a-30): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.752 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5464b8a7-efa9-4f2f-a052-7ae5dd22c8c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.775 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[bcacc883-ab11-43fe-8e00-42c63ab0e928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.778 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe696c3-7da3-4d37-9855-a99acb6d17a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.7929] device (tapb6f67b7a-30): carrier: link connected
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.796 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2c8e56-282a-4222-b49f-f2f7e3a6487d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.808 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[33e20f94-79b9-44c5-b0d1-cac64eef3d03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449870, 'reachable_time': 42119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265273, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.819 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cd7442-6713-4231-a9c7-9623b42e60ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:b29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449870, 'tstamp': 449870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265274, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.829 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb74323-7677-44d4-85cc-fb190e062f4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449870, 'reachable_time': 42119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265275, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.848 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6eafec-5cdf-494d-8909-b76d295a1f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.862 239549 DEBUG nova.compute.manager [req-8218a959-9e05-4016-98cf-02de3e534fba req-f12fc12c-94c7-48af-a33d-b885c54d2988 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.863 239549 DEBUG oslo_concurrency.lockutils [req-8218a959-9e05-4016-98cf-02de3e534fba req-f12fc12c-94c7-48af-a33d-b885c54d2988 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.863 239549 DEBUG oslo_concurrency.lockutils [req-8218a959-9e05-4016-98cf-02de3e534fba req-f12fc12c-94c7-48af-a33d-b885c54d2988 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.863 239549 DEBUG oslo_concurrency.lockutils [req-8218a959-9e05-4016-98cf-02de3e534fba req-f12fc12c-94c7-48af-a33d-b885c54d2988 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.863 239549 DEBUG nova.compute.manager [req-8218a959-9e05-4016-98cf-02de3e534fba req-f12fc12c-94c7-48af-a33d-b885c54d2988 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Processing event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.897 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c774c5e-d487-4e60-9c41-6217de251e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.900 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.903 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 kernel: tapb6f67b7a-30: entered promiscuous mode
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.901 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.901 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6f67b7a-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:32 compute-0 NetworkManager[49171]: <info>  [1770047012.9048] manager: (tapb6f67b7a-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.905 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.912 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6f67b7a-30, col_values=(('external_ids', {'iface-id': '4216aeff-7d93-404b-9880-8737d42e9d19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:32 compute-0 ovn_controller[144995]: 2026-02-02T15:43:32Z|00186|binding|INFO|Releasing lport 4216aeff-7d93-404b-9880-8737d42e9d19 from this chassis (sb_readonly=0)
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.914 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.918 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.919 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bae7c6dc-122e-4835-9eee-01548e0c455c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:32 compute-0 nova_compute[239545]: 2026-02-02 15:43:32.920 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.920 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:43:32 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:32.921 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'env', 'PROCESS_TAG=haproxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.040004851 +0000 UTC m=+0.047859011 container create 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:43:33 compute-0 systemd[1]: Started libpod-conmon-1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712.scope.
Feb 02 15:43:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.015244471 +0000 UTC m=+0.023098661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.11421614 +0000 UTC m=+0.122070330 container init 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:43:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Feb 02 15:43:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.120418381 +0000 UTC m=+0.128272541 container start 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:43:33 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Feb 02 15:43:33 compute-0 recursing_curran[265348]: 167 167
Feb 02 15:43:33 compute-0 systemd[1]: libpod-1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712.scope: Deactivated successfully.
Feb 02 15:43:33 compute-0 conmon[265348]: conmon 1a9b564f8b0424809c80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712.scope/container/memory.events
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.125181756 +0000 UTC m=+0.133035946 container attach 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.128167169 +0000 UTC m=+0.136021339 container died 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-45031f4d1519282c9d1c8d7aa51b719edb0182d61670874b35de6fd34ab6dd0b-merged.mount: Deactivated successfully.
Feb 02 15:43:33 compute-0 podman[265331]: 2026-02-02 15:43:33.166278513 +0000 UTC m=+0.174132673 container remove 1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_curran, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:43:33 compute-0 systemd[1]: libpod-conmon-1a9b564f8b0424809c80fa0bab2f1036e276c1c58f362a75c95735fa9939f712.scope: Deactivated successfully.
Feb 02 15:43:33 compute-0 ceph-mon[75334]: pgmap v1539: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 364 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 997 KiB/s wr, 165 op/s
Feb 02 15:43:33 compute-0 ceph-mon[75334]: osdmap e445: 3 total, 3 up, 3 in
Feb 02 15:43:33 compute-0 podman[265395]: 2026-02-02 15:43:33.304351061 +0000 UTC m=+0.045863363 container create b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 02 15:43:33 compute-0 podman[265394]: 2026-02-02 15:43:33.308770748 +0000 UTC m=+0.051164661 container create 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 02 15:43:33 compute-0 systemd[1]: Started libpod-conmon-412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e.scope.
Feb 02 15:43:33 compute-0 systemd[1]: Started libpod-conmon-b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88.scope.
Feb 02 15:43:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec645be37fab732dd471f90c68f1038283a8b92066239114a7ec01c02af68c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e335bb3809cc554f6014b67fb5c1bbf63a7cedc088afff53e35072a268767e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e335bb3809cc554f6014b67fb5c1bbf63a7cedc088afff53e35072a268767e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e335bb3809cc554f6014b67fb5c1bbf63a7cedc088afff53e35072a268767e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e335bb3809cc554f6014b67fb5c1bbf63a7cedc088afff53e35072a268767e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:43:33 compute-0 podman[265394]: 2026-02-02 15:43:33.279874478 +0000 UTC m=+0.022268391 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:43:33 compute-0 podman[265395]: 2026-02-02 15:43:33.282181553 +0000 UTC m=+0.023693875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:43:33 compute-0 podman[265394]: 2026-02-02 15:43:33.391531275 +0000 UTC m=+0.133925188 container init 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:43:33 compute-0 podman[265395]: 2026-02-02 15:43:33.394268961 +0000 UTC m=+0.135781273 container init b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:43:33 compute-0 podman[265394]: 2026-02-02 15:43:33.398630977 +0000 UTC m=+0.141024870 container start 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 15:43:33 compute-0 podman[265395]: 2026-02-02 15:43:33.401074866 +0000 UTC m=+0.142587168 container start b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:43:33 compute-0 podman[265395]: 2026-02-02 15:43:33.404328905 +0000 UTC m=+0.145841207 container attach b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:43:33 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [NOTICE]   (265430) : New worker (265434) forked
Feb 02 15:43:33 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [NOTICE]   (265430) : Loading success.
Feb 02 15:43:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 364 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 490 KiB/s wr, 122 op/s
Feb 02 15:43:33 compute-0 lvm[265513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:43:33 compute-0 lvm[265513]: VG ceph_vg0 finished
Feb 02 15:43:33 compute-0 lvm[265515]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:43:33 compute-0 lvm[265515]: VG ceph_vg1 finished
Feb 02 15:43:33 compute-0 lvm[265516]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:43:33 compute-0 lvm[265516]: VG ceph_vg2 finished
Feb 02 15:43:34 compute-0 hungry_sinoussi[265424]: {}
Feb 02 15:43:34 compute-0 systemd[1]: libpod-b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88.scope: Deactivated successfully.
Feb 02 15:43:34 compute-0 systemd[1]: libpod-b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88.scope: Consumed 1.029s CPU time.
Feb 02 15:43:34 compute-0 podman[265395]: 2026-02-02 15:43:34.091282833 +0000 UTC m=+0.832795125 container died b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e335bb3809cc554f6014b67fb5c1bbf63a7cedc088afff53e35072a268767e-merged.mount: Deactivated successfully.
Feb 02 15:43:34 compute-0 podman[265395]: 2026-02-02 15:43:34.12991823 +0000 UTC m=+0.871430532 container remove b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:43:34 compute-0 systemd[1]: libpod-conmon-b6fbf96eff62a2623108307420f61720824e5afd73af4b1f71d9a8e0b4fe0c88.scope: Deactivated successfully.
Feb 02 15:43:34 compute-0 sudo[265219]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:43:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:43:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Feb 02 15:43:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Feb 02 15:43:34 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Feb 02 15:43:34 compute-0 sudo[265533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:43:34 compute-0 sudo[265533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:43:34 compute-0 sudo[265533]: pam_unix(sudo:session): session closed for user root
Feb 02 15:43:34 compute-0 nova_compute[239545]: 2026-02-02 15:43:34.362 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.040 239549 DEBUG nova.compute.manager [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.041 239549 DEBUG oslo_concurrency.lockutils [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.041 239549 DEBUG oslo_concurrency.lockutils [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.041 239549 DEBUG oslo_concurrency.lockutils [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.041 239549 DEBUG nova.compute.manager [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] No waiting events found dispatching network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.041 239549 WARNING nova.compute.manager [req-571d394c-3a3b-46fd-85cd-80ff104d63b2 req-e38965f8-b788-42bf-aca6-5a79af55e24e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received unexpected event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 for instance with vm_state building and task_state spawning.
Feb 02 15:43:35 compute-0 ceph-mon[75334]: pgmap v1541: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 364 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 490 KiB/s wr, 122 op/s
Feb 02 15:43:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:43:35 compute-0 ceph-mon[75334]: osdmap e446: 3 total, 3 up, 3 in
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.411 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.412 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047015.4109957, b9beea2c-422e-4f83-9a08-6275c559a931 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.412 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] VM Started (Lifecycle Event)
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.415 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.419 239549 INFO nova.virt.libvirt.driver [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Instance spawned successfully.
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.419 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.432 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.439 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.443 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.443 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.443 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.444 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.444 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.444 239549 DEBUG nova.virt.libvirt.driver [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.471 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.472 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047015.4111273, b9beea2c-422e-4f83-9a08-6275c559a931 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.472 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] VM Paused (Lifecycle Event)
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.497 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.499 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047015.4133468, b9beea2c-422e-4f83-9a08-6275c559a931 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.500 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] VM Resumed (Lifecycle Event)
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.523 239549 INFO nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Took 6.75 seconds to spawn the instance on the hypervisor.
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.523 239549 DEBUG nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.524 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.530 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.564 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.588 239549 INFO nova.compute.manager [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Took 9.02 seconds to build instance.
Feb 02 15:43:35 compute-0 nova_compute[239545]: 2026-02-02 15:43:35.605 239549 DEBUG oslo_concurrency.lockutils [None req-5f4f493c-c3e4-4d3d-a3e3-6dd391e6ecd7 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 368 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 596 KiB/s wr, 181 op/s
Feb 02 15:43:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Feb 02 15:43:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Feb 02 15:43:36 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Feb 02 15:43:36 compute-0 nova_compute[239545]: 2026-02-02 15:43:36.834 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3212273168' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3212273168' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:37 compute-0 ceph-mon[75334]: pgmap v1543: 305 pgs: 305 active+clean; 368 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 596 KiB/s wr, 181 op/s
Feb 02 15:43:37 compute-0 ceph-mon[75334]: osdmap e447: 3 total, 3 up, 3 in
Feb 02 15:43:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3212273168' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3212273168' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 368 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 121 KiB/s wr, 75 op/s
Feb 02 15:43:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Feb 02 15:43:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Feb 02 15:43:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Feb 02 15:43:39 compute-0 ceph-mon[75334]: pgmap v1545: 305 pgs: 305 active+clean; 368 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 121 KiB/s wr, 75 op/s
Feb 02 15:43:39 compute-0 ceph-mon[75334]: osdmap e448: 3 total, 3 up, 3 in
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.365 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.716 239549 DEBUG nova.compute.manager [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-changed-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.716 239549 DEBUG nova.compute.manager [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Refreshing instance network info cache due to event network-changed-fca7a8cb-6a93-4ab4-b48f-742237e61009. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.716 239549 DEBUG oslo_concurrency.lockutils [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.716 239549 DEBUG oslo_concurrency.lockutils [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:43:39 compute-0 nova_compute[239545]: 2026-02-02 15:43:39.716 239549 DEBUG nova.network.neutron [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Refreshing network info cache for port fca7a8cb-6a93-4ab4-b48f-742237e61009 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:43:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 138 KiB/s wr, 110 op/s
Feb 02 15:43:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Feb 02 15:43:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Feb 02 15:43:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Feb 02 15:43:41 compute-0 ceph-mon[75334]: pgmap v1547: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 138 KiB/s wr, 110 op/s
Feb 02 15:43:41 compute-0 ceph-mon[75334]: osdmap e449: 3 total, 3 up, 3 in
Feb 02 15:43:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Feb 02 15:43:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Feb 02 15:43:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Feb 02 15:43:41 compute-0 nova_compute[239545]: 2026-02-02 15:43:41.399 239549 DEBUG nova.network.neutron [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updated VIF entry in instance network info cache for port fca7a8cb-6a93-4ab4-b48f-742237e61009. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:43:41 compute-0 nova_compute[239545]: 2026-02-02 15:43:41.401 239549 DEBUG nova.network.neutron [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updating instance_info_cache with network_info: [{"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:41 compute-0 nova_compute[239545]: 2026-02-02 15:43:41.421 239549 DEBUG oslo_concurrency.lockutils [req-d30ea7cb-3b76-4ebc-9578-426f7713a752 req-9fed8b4b-40b4-4838-89dc-fac0c24de4a1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-b9beea2c-422e-4f83-9a08-6275c559a931" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:43:41 compute-0 nova_compute[239545]: 2026-02-02 15:43:41.839 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 28 KiB/s wr, 185 op/s
Feb 02 15:43:42 compute-0 ceph-mon[75334]: osdmap e450: 3 total, 3 up, 3 in
Feb 02 15:43:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206274694' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206274694' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:43:42
Feb 02 15:43:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:43:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:43:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', '.mgr']
Feb 02 15:43:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:43:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e450 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Feb 02 15:43:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Feb 02 15:43:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Feb 02 15:43:43 compute-0 podman[265565]: 2026-02-02 15:43:43.372054808 +0000 UTC m=+0.093565809 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Feb 02 15:43:43 compute-0 ceph-mon[75334]: pgmap v1550: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 28 KiB/s wr, 185 op/s
Feb 02 15:43:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3206274694' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:43 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3206274694' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:43 compute-0 podman[265564]: 2026-02-02 15:43:43.412751875 +0000 UTC m=+0.133850197 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Feb 02 15:43:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 183 op/s
Feb 02 15:43:44 compute-0 nova_compute[239545]: 2026-02-02 15:43:44.373 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:44 compute-0 ceph-mon[75334]: osdmap e451: 3 total, 3 up, 3 in
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:43:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:43:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Feb 02 15:43:45 compute-0 ceph-mon[75334]: pgmap v1552: 305 pgs: 305 active+clean; 368 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 183 op/s
Feb 02 15:43:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Feb 02 15:43:45 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Feb 02 15:43:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 835 KiB/s rd, 409 KiB/s wr, 118 op/s
Feb 02 15:43:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Feb 02 15:43:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Feb 02 15:43:46 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Feb 02 15:43:46 compute-0 ceph-mon[75334]: osdmap e452: 3 total, 3 up, 3 in
Feb 02 15:43:46 compute-0 nova_compute[239545]: 2026-02-02 15:43:46.843 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:47 compute-0 ceph-mon[75334]: pgmap v1554: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 835 KiB/s rd, 409 KiB/s wr, 118 op/s
Feb 02 15:43:47 compute-0 ceph-mon[75334]: osdmap e453: 3 total, 3 up, 3 in
Feb 02 15:43:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753794485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753794485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 394 KiB/s wr, 69 op/s
Feb 02 15:43:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Feb 02 15:43:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Feb 02 15:43:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Feb 02 15:43:48 compute-0 ovn_controller[144995]: 2026-02-02T15:43:48Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.9
Feb 02 15:43:48 compute-0 ovn_controller[144995]: 2026-02-02T15:43:48Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:06:42:6b 10.100.0.9
Feb 02 15:43:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/753794485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/753794485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:48 compute-0 ceph-mon[75334]: osdmap e454: 3 total, 3 up, 3 in
Feb 02 15:43:49 compute-0 nova_compute[239545]: 2026-02-02 15:43:49.378 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:49 compute-0 ceph-mon[75334]: pgmap v1556: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 394 KiB/s wr, 69 op/s
Feb 02 15:43:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 391 KiB/s wr, 80 op/s
Feb 02 15:43:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Feb 02 15:43:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Feb 02 15:43:50 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.557 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.557 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.558 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.558 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.558 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.559 239549 INFO nova.compute.manager [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Terminating instance
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.560 239549 DEBUG nova.compute.manager [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:43:50 compute-0 kernel: tap733bac79-4a (unregistering): left promiscuous mode
Feb 02 15:43:50 compute-0 NetworkManager[49171]: <info>  [1770047030.6173] device (tap733bac79-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.626 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 ovn_controller[144995]: 2026-02-02T15:43:50Z|00187|binding|INFO|Releasing lport 733bac79-4a7c-42c0-90c2-1a1e68e1543f from this chassis (sb_readonly=0)
Feb 02 15:43:50 compute-0 ovn_controller[144995]: 2026-02-02T15:43:50Z|00188|binding|INFO|Setting lport 733bac79-4a7c-42c0-90c2-1a1e68e1543f down in Southbound
Feb 02 15:43:50 compute-0 ovn_controller[144995]: 2026-02-02T15:43:50Z|00189|binding|INFO|Removing iface tap733bac79-4a ovn-installed in OVS
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.628 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.636 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b5:8f 10.100.0.14'], port_security=['fa:16:3e:02:b5:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '84365bea-19f8-4121-86d5-dd9e1a5eeaa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '004cec53-e5f7-46da-97cb-05df85e39e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=733bac79-4a7c-42c0-90c2-1a1e68e1543f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.638 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.640 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 733bac79-4a7c-42c0-90c2-1a1e68e1543f in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.641 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.652 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[59f9c64c-8623-45a9-b00f-c73e6d78476c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.669 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[629053cb-a298-4747-95ec-cd07d0e35ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb 02 15:43:50 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 13.339s CPU time.
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.673 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[a051fa9a-2d0a-4b45-bead-50827daf2587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 systemd-machined[207609]: Machine qemu-19-instance-00000013 terminated.
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.689 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7b7b8b-2fb6-49e8-b997-9c746dd1b13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.704 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[156e0773-277c-4cfd-b281-64db193615f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442796, 'reachable_time': 17277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265618, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.718 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[70a6377e-61dd-4cf9-ba56-e639471cd93d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442804, 'tstamp': 442804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265619, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442806, 'tstamp': 442806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265619, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.720 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.721 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.726 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.726 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.727 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.727 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:50.727 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.799 239549 INFO nova.virt.libvirt.driver [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Instance destroyed successfully.
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.799 239549 DEBUG nova.objects.instance [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.814 239549 DEBUG nova.virt.libvirt.vif [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-945783403',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-945783403',id=19,image_ref='d4b335c4-e07b-4ee0-9761-2796fef45b8d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmrRjEkb6cgWeSYjc37lQrrxVPle3ppFSK+u78pO20zuPWTQn5idX7F6RNg8VypNDKVczqmhPVrv5jrSZc9gmN0i7G3lT8N9zdxz/YDHLLnJS8oiVtr1W60g1y/bS6ftw==',key_name='tempest-keypair-846465039',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:43:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-ub0gw0t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-77302308',image_owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:43:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=84365bea-19f8-4121-86d5-dd9e1a5eeaa3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", 
"address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.815 239549 DEBUG nova.network.os_vif_util [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "address": "fa:16:3e:02:b5:8f", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733bac79-4a", "ovs_interfaceid": "733bac79-4a7c-42c0-90c2-1a1e68e1543f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.815 239549 DEBUG nova.network.os_vif_util [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.816 239549 DEBUG os_vif [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.817 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.818 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap733bac79-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.821 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.823 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.826 239549 INFO os_vif [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b5:8f,bridge_name='br-int',has_traffic_filtering=True,id=733bac79-4a7c-42c0-90c2-1a1e68e1543f,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733bac79-4a')
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.973 239549 INFO nova.virt.libvirt.driver [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Deleting instance files /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3_del
Feb 02 15:43:50 compute-0 nova_compute[239545]: 2026-02-02 15:43:50.974 239549 INFO nova.virt.libvirt.driver [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Deletion of /var/lib/nova/instances/84365bea-19f8-4121-86d5-dd9e1a5eeaa3_del complete
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.053 239549 INFO nova.compute.manager [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Took 0.49 seconds to destroy the instance on the hypervisor.
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.055 239549 DEBUG oslo.service.loopingcall [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.055 239549 DEBUG nova.compute.manager [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.055 239549 DEBUG nova.network.neutron [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.404 239549 DEBUG nova.compute.manager [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-unplugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.404 239549 DEBUG oslo_concurrency.lockutils [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.405 239549 DEBUG oslo_concurrency.lockutils [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.405 239549 DEBUG oslo_concurrency.lockutils [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.405 239549 DEBUG nova.compute.manager [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] No waiting events found dispatching network-vif-unplugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.405 239549 DEBUG nova.compute.manager [req-85cec4e2-ef6c-41cb-aa75-8824fef14375 req-ea1e16b0-eea9-4b08-9b26-315a7fe85110 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-unplugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:43:51 compute-0 ceph-mon[75334]: pgmap v1558: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 391 KiB/s wr, 80 op/s
Feb 02 15:43:51 compute-0 ceph-mon[75334]: osdmap e455: 3 total, 3 up, 3 in
Feb 02 15:43:51 compute-0 nova_compute[239545]: 2026-02-02 15:43:51.663 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:51 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:51.663 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:51 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:51.665 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:43:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 18 KiB/s wr, 138 op/s
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.318 239549 DEBUG nova.network.neutron [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.336 239549 INFO nova.compute.manager [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Took 1.28 seconds to deallocate network for instance.
Feb 02 15:43:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Feb 02 15:43:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Feb 02 15:43:52 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 15:43:52 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.517 239549 INFO nova.compute.manager [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Took 0.18 seconds to detach 1 volumes for instance.
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.520 239549 DEBUG nova.compute.manager [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Deleting volume: f4e480c6-ad80-4ac0-bde3-8ca6f7670b08 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Feb 02 15:43:52 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:52.667 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.703 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.704 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:52 compute-0 nova_compute[239545]: 2026-02-02 15:43:52.795 239549 DEBUG oslo_concurrency.processutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2265336518' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2265336518' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Feb 02 15:43:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Feb 02 15:43:53 compute-0 ovn_controller[144995]: 2026-02-02T15:43:53Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.9
Feb 02 15:43:53 compute-0 ovn_controller[144995]: 2026-02-02T15:43:53Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:06:42:6b 10.100.0.9
Feb 02 15:43:53 compute-0 ovn_controller[144995]: 2026-02-02T15:43:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:42:6b 10.100.0.9
Feb 02 15:43:53 compute-0 ovn_controller[144995]: 2026-02-02T15:43:53Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:42:6b 10.100.0.9
Feb 02 15:43:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375913494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.454 239549 DEBUG oslo_concurrency.processutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.466 239549 DEBUG nova.compute.provider_tree [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.493 239549 DEBUG nova.compute.manager [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.494 239549 DEBUG oslo_concurrency.lockutils [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.494 239549 DEBUG oslo_concurrency.lockutils [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.494 239549 DEBUG oslo_concurrency.lockutils [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.495 239549 DEBUG nova.compute.manager [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] No waiting events found dispatching network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.495 239549 WARNING nova.compute.manager [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received unexpected event network-vif-plugged-733bac79-4a7c-42c0-90c2-1a1e68e1543f for instance with vm_state deleted and task_state None.
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.495 239549 DEBUG nova.compute.manager [req-4b49f42d-178b-4adc-ad8d-1dda064f3723 req-ebd72d7f-92b6-48d5-aac3-dae811087883 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Received event network-vif-deleted-733bac79-4a7c-42c0-90c2-1a1e68e1543f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.497 239549 DEBUG nova.scheduler.client.report [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:53 compute-0 ceph-mon[75334]: pgmap v1560: 305 pgs: 305 active+clean; 372 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 18 KiB/s wr, 138 op/s
Feb 02 15:43:53 compute-0 ceph-mon[75334]: osdmap e456: 3 total, 3 up, 3 in
Feb 02 15:43:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2265336518' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2265336518' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:53 compute-0 ceph-mon[75334]: osdmap e457: 3 total, 3 up, 3 in
Feb 02 15:43:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3375913494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.534 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.559 239549 INFO nova.scheduler.client.report [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 84365bea-19f8-4121-86d5-dd9e1a5eeaa3
Feb 02 15:43:53 compute-0 nova_compute[239545]: 2026-02-02 15:43:53.647 239549 DEBUG oslo_concurrency.lockutils [None req-33cdf5c9-2385-4b31-9109-40031c107c2a b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "84365bea-19f8-4121-86d5-dd9e1a5eeaa3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 367 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 19 KiB/s wr, 129 op/s
Feb 02 15:43:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082114730' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082114730' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:54 compute-0 nova_compute[239545]: 2026-02-02 15:43:54.380 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Feb 02 15:43:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:43:54 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.3893446785992085e-06 of space, bias 1.0, pg target 0.0019168034035797626 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0037526059229873684 of space, bias 1.0, pg target 1.1257817768962106 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.582254699798443e-06 of space, bias 1.0, pg target 0.0007720941552397344 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677905369171323 of space, bias 1.0, pg target 0.19966937053822256 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4939624567535493e-06 of space, bias 4.0, pg target 0.001786779098277245 quantized to 16 (current 16)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:43:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 02 15:43:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1082114730' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1082114730' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.473 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.473 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.474 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.474 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.474 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.475 239549 INFO nova.compute.manager [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Terminating instance
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.476 239549 DEBUG nova.compute.manager [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:43:55 compute-0 kernel: tapf325e981-4c (unregistering): left promiscuous mode
Feb 02 15:43:55 compute-0 NetworkManager[49171]: <info>  [1770047035.5249] device (tapf325e981-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.533 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 ovn_controller[144995]: 2026-02-02T15:43:55Z|00190|binding|INFO|Releasing lport f325e981-4c0c-4aa0-814b-8e0d58e800d4 from this chassis (sb_readonly=0)
Feb 02 15:43:55 compute-0 ovn_controller[144995]: 2026-02-02T15:43:55Z|00191|binding|INFO|Setting lport f325e981-4c0c-4aa0-814b-8e0d58e800d4 down in Southbound
Feb 02 15:43:55 compute-0 ovn_controller[144995]: 2026-02-02T15:43:55Z|00192|binding|INFO|Removing iface tapf325e981-4c ovn-installed in OVS
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.535 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 ceph-mon[75334]: pgmap v1563: 305 pgs: 305 active+clean; 367 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 19 KiB/s wr, 129 op/s
Feb 02 15:43:55 compute-0 ceph-mon[75334]: osdmap e458: 3 total, 3 up, 3 in
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.541 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:1d:eb 10.100.0.10'], port_security=['fa:16:3e:3d:1d:eb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4d22e226-bdcc-49f4-b9b5-85c81397a0f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f81cf6a-8223-4b83-8701-8d2d1d8d2a2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=f325e981-4c0c-4aa0-814b-8e0d58e800d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.542 154982 INFO neutron.agent.ovn.metadata.agent [-] Port f325e981-4c0c-4aa0-814b-8e0d58e800d4 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.544 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473fc4ca-a137-447b-9349-9f4677babee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.545 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe85a134-b60f-4576-9552-fa55efb4aeee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.546 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace which is not needed anymore
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.546 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 02 15:43:55 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 15.120s CPU time.
Feb 02 15:43:55 compute-0 systemd-machined[207609]: Machine qemu-17-instance-00000011 terminated.
Feb 02 15:43:55 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [NOTICE]   (263315) : haproxy version is 2.8.14-c23fe91
Feb 02 15:43:55 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [NOTICE]   (263315) : path to executable is /usr/sbin/haproxy
Feb 02 15:43:55 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [WARNING]  (263315) : Exiting Master process...
Feb 02 15:43:55 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [ALERT]    (263315) : Current worker (263317) exited with code 143 (Terminated)
Feb 02 15:43:55 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[263311]: [WARNING]  (263315) : All workers exited. Exiting... (0)
Feb 02 15:43:55 compute-0 systemd[1]: libpod-04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc.scope: Deactivated successfully.
Feb 02 15:43:55 compute-0 podman[265695]: 2026-02-02 15:43:55.657578546 +0000 UTC m=+0.039201402 container died 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc-userdata-shm.mount: Deactivated successfully.
Feb 02 15:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b5a9b4c502ccf5b3497c0f5e33c8f59d5f94c1c6a0b682fbfa6471709f1173a-merged.mount: Deactivated successfully.
Feb 02 15:43:55 compute-0 podman[265695]: 2026-02-02 15:43:55.695876784 +0000 UTC m=+0.077499650 container cleanup 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:43:55 compute-0 systemd[1]: libpod-conmon-04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc.scope: Deactivated successfully.
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.704 239549 INFO nova.virt.libvirt.driver [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Instance destroyed successfully.
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.706 239549 DEBUG nova.objects.instance [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.730 239549 DEBUG nova.virt.libvirt.vif [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',display_name='tempest-TestVolumeBootPattern-volume-backed-server-2112237726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-2112237726',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ3I7ej2QEq412Pxe8wdTTgEK6xdMVdaKUJK8NpNgcYZHZmL1ut3LqFHWwwEEk4vb9ouHqvw3XDrJ+X+Wi45pbQkXF60G3n4jYLfmhBujBWP8h1RUz8SU1iZp6vasJ04pw==',key_name='tempest-keypair-927612033',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:42:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-fw00ids4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:42:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=4d22e226-bdcc-49f4-b9b5-85c81397a0f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.730 239549 DEBUG nova.network.os_vif_util [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "address": "fa:16:3e:3d:1d:eb", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf325e981-4c", "ovs_interfaceid": "f325e981-4c0c-4aa0-814b-8e0d58e800d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.731 239549 DEBUG nova.network.os_vif_util [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.731 239549 DEBUG os_vif [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.733 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.734 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf325e981-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.735 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.736 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.738 239549 INFO os_vif [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:1d:eb,bridge_name='br-int',has_traffic_filtering=True,id=f325e981-4c0c-4aa0-814b-8e0d58e800d4,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf325e981-4c')
Feb 02 15:43:55 compute-0 podman[265736]: 2026-02-02 15:43:55.762814778 +0000 UTC m=+0.047723398 container remove 04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.767 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[31633de1-413c-4405-997e-58c3ef91f165]: (4, ('Mon Feb  2 03:43:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc)\n04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc\nMon Feb  2 03:43:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc)\n04cb6da3ceb9822ebed1721e8f536e495dc36ee0f6005400fea6444f138e71cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.769 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c78427-3bce-4443-8314-763084b66261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.770 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.771 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 kernel: tap473fc4ca-a0: left promiscuous mode
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.778 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.781 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[86a700bc-a675-43c4-add8-4203d1b66a61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.795 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2ebd4d-2e08-409b-bd6e-49dd7f9bc80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.796 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f76c1e06-d25e-42c6-9a73-a13ca31192d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.810 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[58aaaaff-4284-406c-8f95-9e758f96081a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442790, 'reachable_time': 43806, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265768, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.813 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:43:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d473fc4ca\x2da137\x2d447b\x2d9349\x2d9f4677babee6.mount: Deactivated successfully.
Feb 02 15:43:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:55.813 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[4c27e1df-80ed-4a9f-aa33-eafbe0108009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.869 239549 INFO nova.virt.libvirt.driver [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Deleting instance files /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3_del
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.870 239549 INFO nova.virt.libvirt.driver [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Deletion of /var/lib/nova/instances/4d22e226-bdcc-49f4-b9b5-85c81397a0f3_del complete
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.918 239549 INFO nova.compute.manager [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Took 0.44 seconds to destroy the instance on the hypervisor.
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.918 239549 DEBUG oslo.service.loopingcall [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.919 239549 DEBUG nova.compute.manager [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:43:55 compute-0 nova_compute[239545]: 2026-02-02 15:43:55.919 239549 DEBUG nova.network.neutron [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:43:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 37 KiB/s wr, 145 op/s
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.592 239549 DEBUG nova.compute.manager [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-unplugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.593 239549 DEBUG oslo_concurrency.lockutils [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.593 239549 DEBUG oslo_concurrency.lockutils [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.594 239549 DEBUG oslo_concurrency.lockutils [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.594 239549 DEBUG nova.compute.manager [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] No waiting events found dispatching network-vif-unplugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:56 compute-0 nova_compute[239545]: 2026-02-02 15:43:56.595 239549 DEBUG nova.compute.manager [req-ac3cb428-5075-49ae-8f33-d8211ed5e9ec req-65adc5f4-1b90-4158-9b15-3bb643e84ca8 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-unplugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.082 239549 DEBUG nova.network.neutron [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.101 239549 INFO nova.compute.manager [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Took 1.18 seconds to deallocate network for instance.
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.259 239549 DEBUG nova.compute.manager [req-693bbba3-906a-4531-8c29-7a23d48c1169 req-e3ab6aed-162d-4cdb-8381-140051a9f911 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-deleted-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.421 239549 INFO nova.compute.manager [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Took 0.32 seconds to detach 1 volumes for instance.
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.422 239549 DEBUG nova.compute.manager [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Deleting volume: 3e04b1a3-0372-4a95-8313-15b657dee567 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Feb 02 15:43:57 compute-0 ceph-mon[75334]: pgmap v1565: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 37 KiB/s wr, 145 op/s
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.627 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.627 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:57 compute-0 nova_compute[239545]: 2026-02-02 15:43:57.692 239549 DEBUG oslo_concurrency.processutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:43:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 34 KiB/s wr, 131 op/s
Feb 02 15:43:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:43:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375895257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:43:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375895257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:43:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Feb 02 15:43:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Feb 02 15:43:58 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Feb 02 15:43:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:43:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149470884' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.275 239549 DEBUG oslo_concurrency.processutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.282 239549 DEBUG nova.compute.provider_tree [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.306 239549 DEBUG nova.scheduler.client.report [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.336 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.385 239549 INFO nova.scheduler.client.report [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 4d22e226-bdcc-49f4-b9b5-85c81397a0f3
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.448 239549 DEBUG oslo_concurrency.lockutils [None req-4be87210-4c58-41dd-ba74-3df2f005239b b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/375895257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:43:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/375895257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:43:58 compute-0 ceph-mon[75334]: osdmap e459: 3 total, 3 up, 3 in
Feb 02 15:43:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/149470884' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.823 239549 DEBUG nova.compute.manager [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.824 239549 DEBUG oslo_concurrency.lockutils [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.824 239549 DEBUG oslo_concurrency.lockutils [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.824 239549 DEBUG oslo_concurrency.lockutils [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "4d22e226-bdcc-49f4-b9b5-85c81397a0f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.824 239549 DEBUG nova.compute.manager [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] No waiting events found dispatching network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:43:58 compute-0 nova_compute[239545]: 2026-02-02 15:43:58.825 239549 WARNING nova.compute.manager [req-d4b1a35d-f470-4b78-b628-beb30ddf46e0 req-3dfddac1-b235-4125-8459-58812156377e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Received unexpected event network-vif-plugged-f325e981-4c0c-4aa0-814b-8e0d58e800d4 for instance with vm_state deleted and task_state None.
Feb 02 15:43:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:59.254 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:43:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:59.254 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:43:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:43:59.255 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:43:59 compute-0 nova_compute[239545]: 2026-02-02 15:43:59.381 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:43:59 compute-0 ceph-mon[75334]: pgmap v1566: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 34 KiB/s wr, 131 op/s
Feb 02 15:43:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 327 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 28 KiB/s wr, 97 op/s
Feb 02 15:44:00 compute-0 nova_compute[239545]: 2026-02-02 15:44:00.736 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Feb 02 15:44:01 compute-0 ceph-mon[75334]: pgmap v1568: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 327 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 28 KiB/s wr, 97 op/s
Feb 02 15:44:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Feb 02 15:44:01 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Feb 02 15:44:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 270 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 33 KiB/s wr, 138 op/s
Feb 02 15:44:02 compute-0 ceph-mon[75334]: osdmap e460: 3 total, 3 up, 3 in
Feb 02 15:44:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Feb 02 15:44:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Feb 02 15:44:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Feb 02 15:44:03 compute-0 nova_compute[239545]: 2026-02-02 15:44:03.563 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:03 compute-0 ceph-mon[75334]: pgmap v1570: 305 pgs: 305 active+clean; 270 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 33 KiB/s wr, 138 op/s
Feb 02 15:44:03 compute-0 ceph-mon[75334]: osdmap e461: 3 total, 3 up, 3 in
Feb 02 15:44:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 270 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 11 KiB/s wr, 116 op/s
Feb 02 15:44:04 compute-0 nova_compute[239545]: 2026-02-02 15:44:04.382 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:05 compute-0 nova_compute[239545]: 2026-02-02 15:44:05.738 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:05 compute-0 ceph-mon[75334]: pgmap v1572: 305 pgs: 305 active+clean; 270 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 11 KiB/s wr, 116 op/s
Feb 02 15:44:05 compute-0 nova_compute[239545]: 2026-02-02 15:44:05.798 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047030.7969546, 84365bea-19f8-4121-86d5-dd9e1a5eeaa3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:05 compute-0 nova_compute[239545]: 2026-02-02 15:44:05.798 239549 INFO nova.compute.manager [-] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] VM Stopped (Lifecycle Event)
Feb 02 15:44:05 compute-0 nova_compute[239545]: 2026-02-02 15:44:05.818 239549 DEBUG nova.compute.manager [None req-ade5d9d8-80d9-4285-bc96-6365baa10501 - - - - - -] [instance: 84365bea-19f8-4121-86d5-dd9e1a5eeaa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 22 KiB/s wr, 78 op/s
Feb 02 15:44:06 compute-0 nova_compute[239545]: 2026-02-02 15:44:06.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:06 compute-0 nova_compute[239545]: 2026-02-02 15:44:06.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:44:06 compute-0 nova_compute[239545]: 2026-02-02 15:44:06.575 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:44:07 compute-0 nova_compute[239545]: 2026-02-02 15:44:07.569 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:07 compute-0 ceph-mon[75334]: pgmap v1573: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 22 KiB/s wr, 78 op/s
Feb 02 15:44:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 22 KiB/s wr, 76 op/s
Feb 02 15:44:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Feb 02 15:44:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Feb 02 15:44:08 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Feb 02 15:44:08 compute-0 nova_compute[239545]: 2026-02-02 15:44:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:08 compute-0 nova_compute[239545]: 2026-02-02 15:44:08.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1265426530' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:08 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1265426530' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:09 compute-0 ceph-mon[75334]: pgmap v1574: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 22 KiB/s wr, 76 op/s
Feb 02 15:44:09 compute-0 ceph-mon[75334]: osdmap e462: 3 total, 3 up, 3 in
Feb 02 15:44:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1265426530' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1265426530' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:09 compute-0 nova_compute[239545]: 2026-02-02 15:44:09.384 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:09 compute-0 nova_compute[239545]: 2026-02-02 15:44:09.630 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.704 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047035.703329, 4d22e226-bdcc-49f4-b9b5-85c81397a0f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.704 239549 INFO nova.compute.manager [-] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] VM Stopped (Lifecycle Event)
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.730 239549 DEBUG nova.compute.manager [None req-de8bc28e-77d2-4728-a96c-607a1656bc75 - - - - - -] [instance: 4d22e226-bdcc-49f4-b9b5-85c81397a0f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:10 compute-0 nova_compute[239545]: 2026-02-02 15:44:10.739 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 ceph-mon[75334]: pgmap v1576: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 02 15:44:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745001277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:11 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:11 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745001277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.582 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.582 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.583 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.632 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.633 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.633 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.633 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.634 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.635 239549 INFO nova.compute.manager [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Terminating instance
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.636 239549 DEBUG nova.compute.manager [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:44:11 compute-0 kernel: tapfca7a8cb-6a (unregistering): left promiscuous mode
Feb 02 15:44:11 compute-0 NetworkManager[49171]: <info>  [1770047051.6779] device (tapfca7a8cb-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.687 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 ovn_controller[144995]: 2026-02-02T15:44:11Z|00193|binding|INFO|Releasing lport fca7a8cb-6a93-4ab4-b48f-742237e61009 from this chassis (sb_readonly=0)
Feb 02 15:44:11 compute-0 ovn_controller[144995]: 2026-02-02T15:44:11Z|00194|binding|INFO|Setting lport fca7a8cb-6a93-4ab4-b48f-742237e61009 down in Southbound
Feb 02 15:44:11 compute-0 ovn_controller[144995]: 2026-02-02T15:44:11Z|00195|binding|INFO|Removing iface tapfca7a8cb-6a ovn-installed in OVS
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.694 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.699 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:42:6b 10.100.0.9'], port_security=['fa:16:3e:06:42:6b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b9beea2c-422e-4f83-9a08-6275c559a931', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8d353b3-b1bd-4128-966b-cb49804d5ec9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=fca7a8cb-6a93-4ab4-b48f-742237e61009) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.701 154982 INFO neutron.agent.ovn.metadata.agent [-] Port fca7a8cb-6a93-4ab4-b48f-742237e61009 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.703 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.704 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c928f5a3-41de-4bff-b17b-d8b5161b1245]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.705 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace which is not needed anymore
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.713 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 02 15:44:11 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 16.848s CPU time.
Feb 02 15:44:11 compute-0 systemd-machined[207609]: Machine qemu-20-instance-00000014 terminated.
Feb 02 15:44:11 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [NOTICE]   (265430) : haproxy version is 2.8.14-c23fe91
Feb 02 15:44:11 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [NOTICE]   (265430) : path to executable is /usr/sbin/haproxy
Feb 02 15:44:11 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [WARNING]  (265430) : Exiting Master process...
Feb 02 15:44:11 compute-0 systemd[1]: libpod-412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e.scope: Deactivated successfully.
Feb 02 15:44:11 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [ALERT]    (265430) : Current worker (265434) exited with code 143 (Terminated)
Feb 02 15:44:11 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[265422]: [WARNING]  (265430) : All workers exited. Exiting... (0)
Feb 02 15:44:11 compute-0 conmon[265422]: conmon 412ac0fbe46f00524b03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e.scope/container/memory.events
Feb 02 15:44:11 compute-0 podman[265837]: 2026-02-02 15:44:11.816270761 +0000 UTC m=+0.039708224 container died 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e-userdata-shm.mount: Deactivated successfully.
Feb 02 15:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dec645be37fab732dd471f90c68f1038283a8b92066239114a7ec01c02af68c2-merged.mount: Deactivated successfully.
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.855 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.860 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 podman[265837]: 2026-02-02 15:44:11.863420294 +0000 UTC m=+0.086857727 container cleanup 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:44:11 compute-0 systemd[1]: libpod-conmon-412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e.scope: Deactivated successfully.
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.875 239549 INFO nova.virt.libvirt.driver [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Instance destroyed successfully.
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.876 239549 DEBUG nova.objects.instance [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'resources' on Instance uuid b9beea2c-422e-4f83-9a08-6275c559a931 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.892 239549 DEBUG nova.virt.libvirt.vif [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:43:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1451984176',display_name='tempest-TransferEncryptedVolumeTest-server-1451984176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1451984176',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKMsL4MWgDZKbVhB6IQOiF8pp1EYQGvyTWbcn/zV7b4n3z7hapmnFr4nrZxT7tbDh4OrqjSbFL2giowZbe7RVbM1MVvSBqtMgXFfoAVQEbSkdr0VJtIIAKRxEkeVY0YVeg==',key_name='tempest-TransferEncryptedVolumeTest-347177902',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-m8v9yvae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:43:35Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=b9beea2c-422e-4f83-9a08-6275c559a931,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.892 239549 DEBUG nova.network.os_vif_util [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "address": "fa:16:3e:06:42:6b", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca7a8cb-6a", "ovs_interfaceid": "fca7a8cb-6a93-4ab4-b48f-742237e61009", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.893 239549 DEBUG nova.network.os_vif_util [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.894 239549 DEBUG os_vif [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.895 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.896 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfca7a8cb-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.897 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.898 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.900 239549 INFO os_vif [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:42:6b,bridge_name='br-int',has_traffic_filtering=True,id=fca7a8cb-6a93-4ab4-b48f-742237e61009,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca7a8cb-6a')
Feb 02 15:44:11 compute-0 podman[265874]: 2026-02-02 15:44:11.932784906 +0000 UTC m=+0.042728407 container remove 412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.937 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea56e8d-9d22-4352-941d-c0cabe98e092]: (4, ('Mon Feb  2 03:44:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e)\n412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e\nMon Feb  2 03:44:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e)\n412ac0fbe46f00524b030667d5db8fec68bc223f2313448e59d08fac9072e57e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.940 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[53fbf669-7e56-48b9-931a-c4ecac1cedd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.942 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.944 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 kernel: tapb6f67b7a-30: left promiscuous mode
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.948 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 30 op/s
Feb 02 15:44:11 compute-0 nova_compute[239545]: 2026-02-02 15:44:11.955 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.957 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[026f1186-5059-4deb-b5bc-be207c6d2abb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.970 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bfddbc51-2e7d-40c4-9a83-f73a3c6f3346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.973 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[80ca85e9-7904-4c7a-8355-d6d460d72b08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.990 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[86ca581d-1430-4c10-b201-0ff5d0dd8f1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449865, 'reachable_time': 30764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265908, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:11 compute-0 systemd[1]: run-netns-ovnmeta\x2db6f67b7a\x2d3fd7\x2d4623\x2d9937\x2d142eb5dabe2c.mount: Deactivated successfully.
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.996 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:44:11 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:11.997 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[520b1d4f-35bf-4136-bdea-f3228c74a552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.064 239549 INFO nova.virt.libvirt.driver [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Deleting instance files /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931_del
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.065 239549 INFO nova.virt.libvirt.driver [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Deletion of /var/lib/nova/instances/b9beea2c-422e-4f83-9a08-6275c559a931_del complete
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.131 239549 INFO nova.compute.manager [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Took 0.49 seconds to destroy the instance on the hypervisor.
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.132 239549 DEBUG oslo.service.loopingcall [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.133 239549 DEBUG nova.compute.manager [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.134 239549 DEBUG nova.network.neutron [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:44:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219628465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2745001277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2745001277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:12 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2219628465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.184 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.349 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.350 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4379MB free_disk=59.98805933445692GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.351 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.351 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.558 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance b9beea2c-422e-4f83-9a08-6275c559a931 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.558 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.558 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.605 239549 DEBUG nova.compute.manager [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-unplugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.606 239549 DEBUG oslo_concurrency.lockutils [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.606 239549 DEBUG oslo_concurrency.lockutils [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.606 239549 DEBUG oslo_concurrency.lockutils [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.607 239549 DEBUG nova.compute.manager [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] No waiting events found dispatching network-vif-unplugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.607 239549 DEBUG nova.compute.manager [req-1551cf4a-7bbf-419a-9bbc-7324de10e002 req-7c941d1a-dd2c-4eb9-8faa-25d4c59365cf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-unplugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:44:12 compute-0 nova_compute[239545]: 2026-02-02 15:44:12.676 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857980809' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:13 compute-0 ceph-mon[75334]: pgmap v1577: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 30 op/s
Feb 02 15:44:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3857980809' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.194 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.200 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.216 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.239 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.240 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.832 239549 DEBUG nova.network.neutron [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:13 compute-0 nova_compute[239545]: 2026-02-02 15:44:13.860 239549 INFO nova.compute.manager [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Took 1.73 seconds to deallocate network for instance.
Feb 02 15:44:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 38 op/s
Feb 02 15:44:14 compute-0 podman[265936]: 2026-02-02 15:44:14.322399592 +0000 UTC m=+0.058120251 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 02 15:44:14 compute-0 podman[265935]: 2026-02-02 15:44:14.353456705 +0000 UTC m=+0.090701051 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.364 239549 INFO nova.compute.manager [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Took 0.50 seconds to detach 1 volumes for instance.
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.385 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.420 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.420 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.459 239549 DEBUG oslo_concurrency.processutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3367509540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3367509540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.692 239549 DEBUG nova.compute.manager [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.692 239549 DEBUG oslo_concurrency.lockutils [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.693 239549 DEBUG oslo_concurrency.lockutils [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.693 239549 DEBUG oslo_concurrency.lockutils [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.693 239549 DEBUG nova.compute.manager [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] No waiting events found dispatching network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.693 239549 WARNING nova.compute.manager [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received unexpected event network-vif-plugged-fca7a8cb-6a93-4ab4-b48f-742237e61009 for instance with vm_state deleted and task_state None.
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.694 239549 DEBUG nova.compute.manager [req-c53fa957-2084-4bd7-9c1c-7b582f86373a req-563230b9-972b-40a9-9293-3ad08293bccd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Received event network-vif-deleted-fca7a8cb-6a93-4ab4-b48f-742237e61009 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175660511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.986 239549 DEBUG oslo_concurrency.processutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:14 compute-0 nova_compute[239545]: 2026-02-02 15:44:14.990 239549 DEBUG nova.compute.provider_tree [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.010 239549 DEBUG nova.scheduler.client.report [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.036 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.072 239549 INFO nova.scheduler.client.report [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Deleted allocations for instance b9beea2c-422e-4f83-9a08-6275c559a931
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.138 239549 DEBUG oslo_concurrency.lockutils [None req-31116de7-381e-446e-bb09-7f2e3aa6b5e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "b9beea2c-422e-4f83-9a08-6275c559a931" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:15 compute-0 ceph-mon[75334]: pgmap v1578: 305 pgs: 305 active+clean; 270 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 38 op/s
Feb 02 15:44:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3367509540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3367509540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:15 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4175660511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.240 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.241 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:15 compute-0 nova_compute[239545]: 2026-02-02 15:44:15.241 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 305 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Feb 02 15:44:16 compute-0 nova_compute[239545]: 2026-02-02 15:44:16.899 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002343263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002343263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:17 compute-0 ceph-mon[75334]: pgmap v1579: 305 pgs: 305 active+clean; 305 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Feb 02 15:44:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4002343263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4002343263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172787537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172787537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 305 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Feb 02 15:44:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2172787537' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2172787537' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.108 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.109 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.128 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.197 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.198 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.204 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.204 239549 INFO nova.compute.claims [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:44:19 compute-0 ceph-mon[75334]: pgmap v1580: 305 pgs: 305 active+clean; 305 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.386 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:19 compute-0 nova_compute[239545]: 2026-02-02 15:44:19.492 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 256 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 02 15:44:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191273073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.053 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.058 239549 DEBUG nova.compute.provider_tree [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.073 239549 DEBUG nova.scheduler.client.report [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.094 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.095 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.146 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.147 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.170 239549 INFO nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.191 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:44:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4191273073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.237 239549 INFO nova.virt.block_device [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Booting with volume 9698e5da-2df0-4288-87d3-c3ebb6c2ab14 at /dev/vda
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.343 239549 DEBUG nova.policy [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.438 239549 DEBUG os_brick.utils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.439 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.449 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.449 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cc3bb8-f822-4b80-96fc-9a5395190894]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.450 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.457 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.457 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd0f82d-f166-49a3-80c0-2e1e34a26a7a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.459 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.466 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.466 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[4e97cd0c-a88d-4393-b610-9fd851d23784]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.468 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[9c200a58-2cf5-4802-af41-d60f5d156c08]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.468 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.490 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.492 239549 DEBUG os_brick.initiator.connectors.lightos [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.493 239549 DEBUG os_brick.initiator.connectors.lightos [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.493 239549 DEBUG os_brick.initiator.connectors.lightos [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.493 239549 DEBUG os_brick.utils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.493 239549 DEBUG nova.virt.block_device [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updating existing volume attachment record: f2116a65-4c1a-41f0-b426-530c5276148a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:44:20 compute-0 nova_compute[239545]: 2026-02-02 15:44:20.563 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:44:21 compute-0 ceph-mon[75334]: pgmap v1581: 305 pgs: 305 active+clean; 256 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 02 15:44:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006312614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.595 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Successfully created port: 082e2fa7-67a2-4169-9b44-15ae8108115b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.770 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.771 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.771 239549 INFO nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Creating image(s)
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.772 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.772 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Ensure instance console log exists: /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.772 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.773 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.773 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:21 compute-0 nova_compute[239545]: 2026-02-02 15:44:21.936 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Feb 02 15:44:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3006312614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.613 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Successfully updated port: 082e2fa7-67a2-4169-9b44-15ae8108115b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.633 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.633 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.633 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.713 239549 DEBUG nova.compute.manager [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-changed-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.713 239549 DEBUG nova.compute.manager [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Refreshing instance network info cache due to event network-changed-082e2fa7-67a2-4169-9b44-15ae8108115b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.713 239549 DEBUG oslo_concurrency.lockutils [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:22 compute-0 nova_compute[239545]: 2026-02-02 15:44:22.844 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:44:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:23 compute-0 ceph-mon[75334]: pgmap v1582: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Feb 02 15:44:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.266 239549 DEBUG nova.network.neutron [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updating instance_info_cache with network_info: [{"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.297 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.297 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Instance network_info: |[{"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.298 239549 DEBUG oslo_concurrency.lockutils [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.298 239549 DEBUG nova.network.neutron [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Refreshing network info cache for port 082e2fa7-67a2-4169-9b44-15ae8108115b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.301 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Start _get_guest_xml network_info=[{"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': 'f2116a65-4c1a-41f0-b426-530c5276148a', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '51307c94-353b-4d22-a215-27dba54ba38a', 'attached_at': '', 'detached_at': '', 'volume_id': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'serial': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.306 239549 WARNING nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.312 239549 DEBUG nova.virt.libvirt.host [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.313 239549 DEBUG nova.virt.libvirt.host [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.322 239549 DEBUG nova.virt.libvirt.host [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.323 239549 DEBUG nova.virt.libvirt.host [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.324 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.324 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.324 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.325 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.325 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.325 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.325 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.326 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.326 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.326 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.327 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.327 239549 DEBUG nova.virt.hardware [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.353 239549 DEBUG nova.storage.rbd_utils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 51307c94-353b-4d22-a215-27dba54ba38a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.358 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.388 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600415610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.882 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.906 239549 DEBUG nova.virt.libvirt.vif [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-825499828',display_name='tempest-TestVolumeBootPattern-server-825499828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-825499828',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-efx6o40n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:20Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=51307c94-353b-4d22-a215-27dba54ba38a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.907 239549 DEBUG nova.network.os_vif_util [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.908 239549 DEBUG nova.network.os_vif_util [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.909 239549 DEBUG nova.objects.instance [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51307c94-353b-4d22-a215-27dba54ba38a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.928 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <uuid>51307c94-353b-4d22-a215-27dba54ba38a</uuid>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <name>instance-00000015</name>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-server-825499828</nova:name>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:44:24</nova:creationTime>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <nova:port uuid="082e2fa7-67a2-4169-9b44-15ae8108115b">
Feb 02 15:44:24 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <system>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="serial">51307c94-353b-4d22-a215-27dba54ba38a</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="uuid">51307c94-353b-4d22-a215-27dba54ba38a</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </system>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <os>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </os>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <features>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </features>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/51307c94-353b-4d22-a215-27dba54ba38a_disk.config">
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-9698e5da-2df0-4288-87d3-c3ebb6c2ab14">
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:24 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <serial>9698e5da-2df0-4288-87d3-c3ebb6c2ab14</serial>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:71:0c:2c"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <target dev="tap082e2fa7-67"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/console.log" append="off"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <video>
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </video>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:44:24 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:44:24 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:44:24 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:44:24 compute-0 nova_compute[239545]: </domain>
Feb 02 15:44:24 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.930 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Preparing to wait for external event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.930 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.931 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.931 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.932 239549 DEBUG nova.virt.libvirt.vif [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-825499828',display_name='tempest-TestVolumeBootPattern-server-825499828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-825499828',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-efx6o40n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:20Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=51307c94-353b-4d22-a215-27dba54ba38a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.932 239549 DEBUG nova.network.os_vif_util [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.933 239549 DEBUG nova.network.os_vif_util [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.933 239549 DEBUG os_vif [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.934 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.934 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.935 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.937 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.937 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap082e2fa7-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.938 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap082e2fa7-67, col_values=(('external_ids', {'iface-id': '082e2fa7-67a2-4169-9b44-15ae8108115b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:0c:2c', 'vm-uuid': '51307c94-353b-4d22-a215-27dba54ba38a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.939 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:24 compute-0 NetworkManager[49171]: <info>  [1770047064.9414] manager: (tap082e2fa7-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.942 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.944 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.947 239549 INFO os_vif [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67')
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.994 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.994 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.995 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:71:0c:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:44:24 compute-0 nova_compute[239545]: 2026-02-02 15:44:24.995 239549 INFO nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Using config drive
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.021 239549 DEBUG nova.storage.rbd_utils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 51307c94-353b-4d22-a215-27dba54ba38a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:25 compute-0 ceph-mon[75334]: pgmap v1583: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Feb 02 15:44:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3600415610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.570 239549 INFO nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Creating config drive at /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.574 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcbx0js9k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.700 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcbx0js9k" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.723 239549 DEBUG nova.storage.rbd_utils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 51307c94-353b-4d22-a215-27dba54ba38a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.726 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config 51307c94-353b-4d22-a215-27dba54ba38a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.841 239549 DEBUG oslo_concurrency.processutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config 51307c94-353b-4d22-a215-27dba54ba38a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.842 239549 INFO nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Deleting local config drive /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a/disk.config because it was imported into RBD.
Feb 02 15:44:25 compute-0 kernel: tap082e2fa7-67: entered promiscuous mode
Feb 02 15:44:25 compute-0 NetworkManager[49171]: <info>  [1770047065.8741] manager: (tap082e2fa7-67): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Feb 02 15:44:25 compute-0 ovn_controller[144995]: 2026-02-02T15:44:25Z|00196|binding|INFO|Claiming lport 082e2fa7-67a2-4169-9b44-15ae8108115b for this chassis.
Feb 02 15:44:25 compute-0 ovn_controller[144995]: 2026-02-02T15:44:25Z|00197|binding|INFO|082e2fa7-67a2-4169-9b44-15ae8108115b: Claiming fa:16:3e:71:0c:2c 10.100.0.7
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.876 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.882 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:0c:2c 10.100.0.7'], port_security=['fa:16:3e:71:0c:2c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51307c94-353b-4d22-a215-27dba54ba38a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=082e2fa7-67a2-4169-9b44-15ae8108115b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.883 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 082e2fa7-67a2-4169-9b44-15ae8108115b in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.884 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:44:25 compute-0 ovn_controller[144995]: 2026-02-02T15:44:25Z|00198|binding|INFO|Setting lport 082e2fa7-67a2-4169-9b44-15ae8108115b ovn-installed in OVS
Feb 02 15:44:25 compute-0 ovn_controller[144995]: 2026-02-02T15:44:25Z|00199|binding|INFO|Setting lport 082e2fa7-67a2-4169-9b44-15ae8108115b up in Southbound
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.887 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:25 compute-0 nova_compute[239545]: 2026-02-02 15:44:25.889 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.892 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7eb73d-0a10-4fa9-800a-868768d86c18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.892 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473fc4ca-a1 in ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.894 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473fc4ca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.894 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[94ed3c2b-d30a-4fbc-b0b9-cbaaad1f828b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.894 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[672261fb-5338-493f-9069-7dd2645cf1f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 systemd-udevd[266143]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:44:25 compute-0 systemd-machined[207609]: New machine qemu-21-instance-00000015.
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.903 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[84462cdb-e1a0-4356-8c6f-709f24ab8a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 NetworkManager[49171]: <info>  [1770047065.9086] device (tap082e2fa7-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:44:25 compute-0 NetworkManager[49171]: <info>  [1770047065.9092] device (tap082e2fa7-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.913 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d44b706-c5c0-4299-a848-231b3f696d61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.938 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[6e33b30a-8f1b-45e5-ae11-7dc4a8ef8a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 systemd-udevd[266147]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:44:25 compute-0 NetworkManager[49171]: <info>  [1770047065.9440] manager: (tap473fc4ca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.943 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef3a79e-a2e4-4e3b-8598-f7a4752ebfd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.969 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee0466e-19c5-4026-a9d8-de50ed06277c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.971 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[b98b7ed1-bd0f-4381-b93f-546ae004e093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 NetworkManager[49171]: <info>  [1770047065.9852] device (tap473fc4ca-a0): carrier: link connected
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.987 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bfd2bb-d9aa-4280-907c-2c8e4360f697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:25.997 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eeee0b-4d02-4ef4-8105-0699fec3feb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455190, 'reachable_time': 18970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266175, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.004 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[70e31027-a2e7-42ae-ad8f-6edd84208aa8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:14cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455190, 'tstamp': 455190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266176, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.013 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef4148d-f810-41fb-8143-def12c6710ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455190, 'reachable_time': 18970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266177, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.027 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9e468130-3fa1-4605-a01a-81d8ae2c0e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.071 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0d775c-ab82-4e38-a91a-127da34eb81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.073 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.073 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.074 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.076 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:26 compute-0 NetworkManager[49171]: <info>  [1770047066.0768] manager: (tap473fc4ca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb 02 15:44:26 compute-0 kernel: tap473fc4ca-a0: entered promiscuous mode
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.078 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.082 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:26 compute-0 ovn_controller[144995]: 2026-02-02T15:44:26Z|00200|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.083 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.087 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.089 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.088 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eae733ff-7176-4a1a-a86d-42b83ddb7cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.090 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:44:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:26.091 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'env', 'PROCESS_TAG=haproxy-473fc4ca-a137-447b-9349-9f4677babee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473fc4ca-a137-447b-9349-9f4677babee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:44:26 compute-0 podman[266214]: 2026-02-02 15:44:26.444242798 +0000 UTC m=+0.050834694 container create 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Feb 02 15:44:26 compute-0 systemd[1]: Started libpod-conmon-50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242.scope.
Feb 02 15:44:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808487ea379ac2891368c3d2f2c99f0ad892f623d84bcd236147e148f79912ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:26 compute-0 podman[266214]: 2026-02-02 15:44:26.418786971 +0000 UTC m=+0.025378887 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.517 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047066.517265, 51307c94-353b-4d22-a215-27dba54ba38a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.519 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] VM Started (Lifecycle Event)
Feb 02 15:44:26 compute-0 podman[266214]: 2026-02-02 15:44:26.520980459 +0000 UTC m=+0.127572455 container init 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 02 15:44:26 compute-0 podman[266214]: 2026-02-02 15:44:26.525893568 +0000 UTC m=+0.132485494 container start 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.539 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.543 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047066.5183096, 51307c94-353b-4d22-a215-27dba54ba38a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.544 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] VM Paused (Lifecycle Event)
Feb 02 15:44:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [NOTICE]   (266269) : New worker (266271) forked
Feb 02 15:44:26 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [NOTICE]   (266269) : Loading success.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.567 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.572 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.587 239549 DEBUG nova.compute.manager [req-53b7ed18-7cd6-4cf9-a573-8930767b0125 req-8585846a-5a44-4a97-a96d-d4eb20fe7303 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.587 239549 DEBUG oslo_concurrency.lockutils [req-53b7ed18-7cd6-4cf9-a573-8930767b0125 req-8585846a-5a44-4a97-a96d-d4eb20fe7303 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.588 239549 DEBUG oslo_concurrency.lockutils [req-53b7ed18-7cd6-4cf9-a573-8930767b0125 req-8585846a-5a44-4a97-a96d-d4eb20fe7303 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.588 239549 DEBUG oslo_concurrency.lockutils [req-53b7ed18-7cd6-4cf9-a573-8930767b0125 req-8585846a-5a44-4a97-a96d-d4eb20fe7303 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.588 239549 DEBUG nova.compute.manager [req-53b7ed18-7cd6-4cf9-a573-8930767b0125 req-8585846a-5a44-4a97-a96d-d4eb20fe7303 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Processing event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.589 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.590 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.592 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.593 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047066.5930212, 51307c94-353b-4d22-a215-27dba54ba38a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.593 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] VM Resumed (Lifecycle Event)
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.595 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.598 239549 INFO nova.virt.libvirt.driver [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Instance spawned successfully.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.598 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.611 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.617 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.621 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.621 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.622 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.622 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.622 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.623 239549 DEBUG nova.virt.libvirt.driver [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.649 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.691 239549 INFO nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Took 4.92 seconds to spawn the instance on the hypervisor.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.692 239549 DEBUG nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.766 239549 INFO nova.compute.manager [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Took 7.59 seconds to build instance.
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.785 239549 DEBUG oslo_concurrency.lockutils [None req-c04d2771-64db-4633-8daf-6aa611550cbc b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.871 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047051.869271, b9beea2c-422e-4f83-9a08-6275c559a931 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.872 239549 INFO nova.compute.manager [-] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] VM Stopped (Lifecycle Event)
Feb 02 15:44:26 compute-0 nova_compute[239545]: 2026-02-02 15:44:26.891 239549 DEBUG nova.compute.manager [None req-2ee0a900-8ad2-4e78-966d-7ab6ce597b3e - - - - - -] [instance: b9beea2c-422e-4f83-9a08-6275c559a931] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:27 compute-0 nova_compute[239545]: 2026-02-02 15:44:27.040 239549 DEBUG nova.network.neutron [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updated VIF entry in instance network info cache for port 082e2fa7-67a2-4169-9b44-15ae8108115b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:44:27 compute-0 nova_compute[239545]: 2026-02-02 15:44:27.041 239549 DEBUG nova.network.neutron [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updating instance_info_cache with network_info: [{"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:27 compute-0 nova_compute[239545]: 2026-02-02 15:44:27.065 239549 DEBUG oslo_concurrency.lockutils [req-eb640327-0c20-44fe-af5f-11ce6f8eb3c1 req-8bcbf597-cde7-45a4-ad9b-7f6e4a62d07f d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:27 compute-0 ceph-mon[75334]: pgmap v1584: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 02 15:44:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 523 KiB/s wr, 43 op/s
Feb 02 15:44:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:44:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892766182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:44:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3892766182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3892766182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:44:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3892766182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.696 239549 DEBUG nova.compute.manager [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.696 239549 DEBUG oslo_concurrency.lockutils [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.697 239549 DEBUG oslo_concurrency.lockutils [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.697 239549 DEBUG oslo_concurrency.lockutils [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.697 239549 DEBUG nova.compute.manager [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] No waiting events found dispatching network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:28 compute-0 nova_compute[239545]: 2026-02-02 15:44:28.698 239549 WARNING nova.compute.manager [req-f2ffa3fc-0a70-42d4-ad31-7a934fc49c3f req-41118eaa-35cd-44bd-be16-2eec4cfbd894 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received unexpected event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b for instance with vm_state active and task_state None.
Feb 02 15:44:29 compute-0 ceph-mon[75334]: pgmap v1585: 305 pgs: 305 active+clean; 134 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 523 KiB/s wr, 43 op/s
Feb 02 15:44:29 compute-0 nova_compute[239545]: 2026-02-02 15:44:29.390 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:29 compute-0 nova_compute[239545]: 2026-02-02 15:44:29.940 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 134 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 523 KiB/s wr, 72 op/s
Feb 02 15:44:30 compute-0 nova_compute[239545]: 2026-02-02 15:44:30.641 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:31 compute-0 ceph-mon[75334]: pgmap v1586: 305 pgs: 305 active+clean; 134 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 523 KiB/s wr, 72 op/s
Feb 02 15:44:31 compute-0 nova_compute[239545]: 2026-02-02 15:44:31.859 239549 DEBUG nova.compute.manager [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-changed-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:31 compute-0 nova_compute[239545]: 2026-02-02 15:44:31.860 239549 DEBUG nova.compute.manager [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Refreshing instance network info cache due to event network-changed-082e2fa7-67a2-4169-9b44-15ae8108115b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:44:31 compute-0 nova_compute[239545]: 2026-02-02 15:44:31.860 239549 DEBUG oslo_concurrency.lockutils [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:31 compute-0 nova_compute[239545]: 2026-02-02 15:44:31.860 239549 DEBUG oslo_concurrency.lockutils [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:31 compute-0 nova_compute[239545]: 2026-02-02 15:44:31.860 239549 DEBUG nova.network.neutron [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Refreshing network info cache for port 082e2fa7-67a2-4169-9b44-15ae8108115b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:44:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 KiB/s wr, 99 op/s
Feb 02 15:44:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:33 compute-0 ceph-mon[75334]: pgmap v1587: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 13 KiB/s wr, 99 op/s
Feb 02 15:44:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 KiB/s wr, 80 op/s
Feb 02 15:44:34 compute-0 nova_compute[239545]: 2026-02-02 15:44:34.168 239549 DEBUG nova.network.neutron [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updated VIF entry in instance network info cache for port 082e2fa7-67a2-4169-9b44-15ae8108115b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:44:34 compute-0 nova_compute[239545]: 2026-02-02 15:44:34.169 239549 DEBUG nova.network.neutron [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updating instance_info_cache with network_info: [{"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:34 compute-0 nova_compute[239545]: 2026-02-02 15:44:34.210 239549 DEBUG oslo_concurrency.lockutils [req-6f407324-bc38-4507-b6df-8e5105432907 req-71b50067-5f6b-4832-af64-d1501821a1ea d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-51307c94-353b-4d22-a215-27dba54ba38a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:34 compute-0 sudo[266280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:44:34 compute-0 sudo[266280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:34 compute-0 sudo[266280]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:34 compute-0 sudo[266305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:44:34 compute-0 sudo[266305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:34 compute-0 nova_compute[239545]: 2026-02-02 15:44:34.396 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:34 compute-0 sudo[266305]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:44:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:44:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:44:34 compute-0 sudo[266361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:44:34 compute-0 sudo[266361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:34 compute-0 sudo[266361]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:34 compute-0 nova_compute[239545]: 2026-02-02 15:44:34.942 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:34 compute-0 sudo[266386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:44:34 compute-0 sudo[266386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.222838296 +0000 UTC m=+0.030832898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.336287127 +0000 UTC m=+0.144281699 container create 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:35 compute-0 ceph-mon[75334]: pgmap v1588: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 KiB/s wr, 80 op/s
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:44:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:44:35 compute-0 systemd[1]: Started libpod-conmon-35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e.scope.
Feb 02 15:44:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.473526835 +0000 UTC m=+0.281521427 container init 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.48114548 +0000 UTC m=+0.289140052 container start 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:35 compute-0 sad_shannon[266435]: 167 167
Feb 02 15:44:35 compute-0 systemd[1]: libpod-35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e.scope: Deactivated successfully.
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.533775456 +0000 UTC m=+0.341770028 container attach 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.534504264 +0000 UTC m=+0.342498826 container died 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 02 15:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b89ffef1494dc45fa42e77d56ee7f5029392dca7ec5617619c95ce91c7a8ea0-merged.mount: Deactivated successfully.
Feb 02 15:44:35 compute-0 podman[266421]: 2026-02-02 15:44:35.713777371 +0000 UTC m=+0.521771943 container remove 35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:44:35 compute-0 systemd[1]: libpod-conmon-35d1202dee4c5a3c27c4761a6403f0fed726c1c228cebecd8e259a797725184e.scope: Deactivated successfully.
Feb 02 15:44:35 compute-0 podman[266460]: 2026-02-02 15:44:35.865823787 +0000 UTC m=+0.058508839 container create fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:44:35 compute-0 podman[266460]: 2026-02-02 15:44:35.837538202 +0000 UTC m=+0.030223264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:35 compute-0 systemd[1]: Started libpod-conmon-fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b.scope.
Feb 02 15:44:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Feb 02 15:44:35 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:36 compute-0 podman[266460]: 2026-02-02 15:44:36.010585818 +0000 UTC m=+0.203270880 container init fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:44:36 compute-0 podman[266460]: 2026-02-02 15:44:36.016481511 +0000 UTC m=+0.209166553 container start fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:44:36 compute-0 podman[266460]: 2026-02-02 15:44:36.081227311 +0000 UTC m=+0.273912373 container attach fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:44:36 compute-0 trusting_sanderson[266478]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:44:36 compute-0 trusting_sanderson[266478]: --> All data devices are unavailable
Feb 02 15:44:36 compute-0 systemd[1]: libpod-fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b.scope: Deactivated successfully.
Feb 02 15:44:36 compute-0 podman[266460]: 2026-02-02 15:44:36.477407358 +0000 UTC m=+0.670092400 container died fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bac9479af94b8dcaf27cce6534d7932660b17e7d0c12a54058bec8908874afd-merged.mount: Deactivated successfully.
Feb 02 15:44:36 compute-0 podman[266460]: 2026-02-02 15:44:36.597821738 +0000 UTC m=+0.790506810 container remove fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:44:36 compute-0 systemd[1]: libpod-conmon-fff5faccd1fd522b1507d5a0c38ce567c09d2bd1fa88c34224ea868ab68ff53b.scope: Deactivated successfully.
Feb 02 15:44:36 compute-0 sudo[266386]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:36 compute-0 sudo[266510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:44:36 compute-0 sudo[266510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:36 compute-0 sudo[266510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:36 compute-0 sudo[266535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:44:36 compute-0 sudo[266535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:36 compute-0 nova_compute[239545]: 2026-02-02 15:44:36.977 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.079155399 +0000 UTC m=+0.091289274 container create 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.007692136 +0000 UTC m=+0.019826041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:37 compute-0 systemd[1]: Started libpod-conmon-5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b.scope.
Feb 02 15:44:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.160502572 +0000 UTC m=+0.172636457 container init 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.166315162 +0000 UTC m=+0.178449047 container start 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.169484189 +0000 UTC m=+0.181618134 container attach 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:44:37 compute-0 objective_kirch[266589]: 167 167
Feb 02 15:44:37 compute-0 systemd[1]: libpod-5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b.scope: Deactivated successfully.
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.170454693 +0000 UTC m=+0.182588588 container died 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-05410566001edae89e7a025e9bc2798a49297ebbe9c4cbb5018ffb634a68c2be-merged.mount: Deactivated successfully.
Feb 02 15:44:37 compute-0 podman[266573]: 2026-02-02 15:44:37.210063323 +0000 UTC m=+0.222197198 container remove 5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kirch, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:44:37 compute-0 systemd[1]: libpod-conmon-5eaf95fcfbfd33ad22893b738d62ea1d779d93f37fe2be5532f73213ff6e6f4b.scope: Deactivated successfully.
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.360365538 +0000 UTC m=+0.042413230 container create fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:44:37 compute-0 ceph-mon[75334]: pgmap v1589: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Feb 02 15:44:37 compute-0 systemd[1]: Started libpod-conmon-fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645.scope.
Feb 02 15:44:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.342427213 +0000 UTC m=+0.024474955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fa805c59d04c6751124c0baa8eea2b360c35596f10c4e478398ddddc881c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fa805c59d04c6751124c0baa8eea2b360c35596f10c4e478398ddddc881c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fa805c59d04c6751124c0baa8eea2b360c35596f10c4e478398ddddc881c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fa805c59d04c6751124c0baa8eea2b360c35596f10c4e478398ddddc881c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.490810821 +0000 UTC m=+0.172858543 container init fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.499256856 +0000 UTC m=+0.181304548 container start fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.524781055 +0000 UTC m=+0.206828777 container attach fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:44:37 compute-0 magical_ganguly[266631]: {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     "0": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "devices": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "/dev/loop3"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             ],
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_name": "ceph_lv0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_size": "21470642176",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "name": "ceph_lv0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "tags": {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.crush_device_class": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.encrypted": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_id": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.vdo": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.with_tpm": "0"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             },
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "vg_name": "ceph_vg0"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         }
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     ],
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     "1": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "devices": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "/dev/loop4"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             ],
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_name": "ceph_lv1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_size": "21470642176",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "name": "ceph_lv1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "tags": {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.crush_device_class": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.encrypted": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_id": "1",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.vdo": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.with_tpm": "0"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             },
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "vg_name": "ceph_vg1"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         }
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     ],
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     "2": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "devices": [
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "/dev/loop5"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             ],
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_name": "ceph_lv2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_size": "21470642176",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "name": "ceph_lv2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "tags": {
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.crush_device_class": "",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.encrypted": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osd_id": "2",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.vdo": "0",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:                 "ceph.with_tpm": "0"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             },
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "type": "block",
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:             "vg_name": "ceph_vg2"
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:         }
Feb 02 15:44:37 compute-0 magical_ganguly[266631]:     ]
Feb 02 15:44:37 compute-0 magical_ganguly[266631]: }
Feb 02 15:44:37 compute-0 systemd[1]: libpod-fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645.scope: Deactivated successfully.
Feb 02 15:44:37 compute-0 conmon[266631]: conmon fcd64b40f16d39047a83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645.scope/container/memory.events
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.86273258 +0000 UTC m=+0.544780282 container died fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 02 15:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4fa805c59d04c6751124c0baa8eea2b360c35596f10c4e478398ddddc881c84-merged.mount: Deactivated successfully.
Feb 02 15:44:37 compute-0 podman[266614]: 2026-02-02 15:44:37.902389542 +0000 UTC m=+0.584437234 container remove fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ganguly, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:44:37 compute-0 systemd[1]: libpod-conmon-fcd64b40f16d39047a832e4ad9cac6553ffab87a8f5daf0ff783d0c0b44ab645.scope: Deactivated successfully.
Feb 02 15:44:37 compute-0 sudo[266535]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Feb 02 15:44:37 compute-0 sudo[266652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:44:38 compute-0 sudo[266652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:38 compute-0 sudo[266652]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:38 compute-0 sudo[266677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:44:38 compute-0 sudo[266677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.364264672 +0000 UTC m=+0.056501752 container create 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:44:38 compute-0 systemd[1]: Started libpod-conmon-391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29.scope.
Feb 02 15:44:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.327573021 +0000 UTC m=+0.019810121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.448519564 +0000 UTC m=+0.140756664 container init 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.454584331 +0000 UTC m=+0.146821431 container start 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:44:38 compute-0 hungry_brattain[266731]: 167 167
Feb 02 15:44:38 compute-0 systemd[1]: libpod-391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29.scope: Deactivated successfully.
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.478162683 +0000 UTC m=+0.170399793 container attach 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.478657145 +0000 UTC m=+0.170894245 container died 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b96cf95c7c86fc1b91e0d6b7eebfd843d86ab4e535636a2cc7c26e75dd47ee-merged.mount: Deactivated successfully.
Feb 02 15:44:38 compute-0 ovn_controller[144995]: 2026-02-02T15:44:38Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:0c:2c 10.100.0.7
Feb 02 15:44:38 compute-0 ovn_controller[144995]: 2026-02-02T15:44:38Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:0c:2c 10.100.0.7
Feb 02 15:44:38 compute-0 podman[266715]: 2026-02-02 15:44:38.562627531 +0000 UTC m=+0.254864611 container remove 391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:44:38 compute-0 systemd[1]: libpod-conmon-391c91669aab101b10fe8a1243523357dc35f55d777e4eb5adb772107b31bd29.scope: Deactivated successfully.
Feb 02 15:44:38 compute-0 podman[266755]: 2026-02-02 15:44:38.684202719 +0000 UTC m=+0.032166121 container create 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:44:38 compute-0 systemd[1]: Started libpod-conmon-099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb.scope.
Feb 02 15:44:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cd6d5d8bb2bb72ee0903a36572098d85a59891bf39016fd37793abaf1d20b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cd6d5d8bb2bb72ee0903a36572098d85a59891bf39016fd37793abaf1d20b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cd6d5d8bb2bb72ee0903a36572098d85a59891bf39016fd37793abaf1d20b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cd6d5d8bb2bb72ee0903a36572098d85a59891bf39016fd37793abaf1d20b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:38 compute-0 podman[266755]: 2026-02-02 15:44:38.670124318 +0000 UTC m=+0.018087750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:44:38 compute-0 podman[266755]: 2026-02-02 15:44:38.793286014 +0000 UTC m=+0.141249446 container init 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:44:38 compute-0 podman[266755]: 2026-02-02 15:44:38.79847385 +0000 UTC m=+0.146437262 container start 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:44:38 compute-0 podman[266755]: 2026-02-02 15:44:38.853497294 +0000 UTC m=+0.201460956 container attach 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:44:39 compute-0 nova_compute[239545]: 2026-02-02 15:44:39.397 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:39 compute-0 ceph-mon[75334]: pgmap v1590: 305 pgs: 305 active+clean; 134 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 34 KiB/s wr, 85 op/s
Feb 02 15:44:39 compute-0 lvm[266849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:44:39 compute-0 lvm[266849]: VG ceph_vg0 finished
Feb 02 15:44:39 compute-0 lvm[266850]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:44:39 compute-0 lvm[266850]: VG ceph_vg1 finished
Feb 02 15:44:39 compute-0 lvm[266852]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:44:39 compute-0 lvm[266852]: VG ceph_vg2 finished
Feb 02 15:44:39 compute-0 confident_rosalind[266771]: {}
Feb 02 15:44:39 compute-0 systemd[1]: libpod-099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb.scope: Deactivated successfully.
Feb 02 15:44:39 compute-0 podman[266755]: 2026-02-02 15:44:39.621124488 +0000 UTC m=+0.969087900 container died 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 15:44:39 compute-0 systemd[1]: libpod-099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb.scope: Consumed 1.164s CPU time.
Feb 02 15:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5cd6d5d8bb2bb72ee0903a36572098d85a59891bf39016fd37793abaf1d20b-merged.mount: Deactivated successfully.
Feb 02 15:44:39 compute-0 nova_compute[239545]: 2026-02-02 15:44:39.735 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:39 compute-0 podman[266755]: 2026-02-02 15:44:39.887478628 +0000 UTC m=+1.235442040 container remove 099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:44:39 compute-0 systemd[1]: libpod-conmon-099dbda4a7d312f73e73a81d21f1f75fa755e0aa1df7aa4bbf98ab064c5064fb.scope: Deactivated successfully.
Feb 02 15:44:39 compute-0 sudo[266677]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:44:39 compute-0 nova_compute[239545]: 2026-02-02 15:44:39.944 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:39 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:44:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 142 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 688 KiB/s wr, 89 op/s
Feb 02 15:44:39 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:40 compute-0 sudo[266870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:44:40 compute-0 sudo[266870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:44:40 compute-0 sudo[266870]: pam_unix(sudo:session): session closed for user root
Feb 02 15:44:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:40 compute-0 ceph-mon[75334]: pgmap v1591: 305 pgs: 305 active+clean; 142 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 688 KiB/s wr, 89 op/s
Feb 02 15:44:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:44:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 164 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Feb 02 15:44:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:44:42
Feb 02 15:44:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:44:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:44:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Feb 02 15:44:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:44:43 compute-0 ceph-mon[75334]: pgmap v1592: 305 pgs: 305 active+clean; 164 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Feb 02 15:44:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 357 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.398 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.941 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.946 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.962 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Triggering sync for uuid 51307c94-353b-4d22-a215-27dba54ba38a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.962 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.963 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "51307c94-353b-4d22-a215-27dba54ba38a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:44:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:44:44 compute-0 nova_compute[239545]: 2026-02-02 15:44:44.992 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "51307c94-353b-4d22-a215-27dba54ba38a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:45 compute-0 ceph-mon[75334]: pgmap v1593: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 357 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Feb 02 15:44:45 compute-0 podman[266896]: 2026-02-02 15:44:45.333218769 +0000 UTC m=+0.068192272 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:44:45 compute-0 podman[266897]: 2026-02-02 15:44:45.334149202 +0000 UTC m=+0.068885229 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:44:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Feb 02 15:44:47 compute-0 ceph-mon[75334]: pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.789 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.789 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.789 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.790 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.790 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.791 239549 INFO nova.compute.manager [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Terminating instance
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.792 239549 DEBUG nova.compute.manager [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:44:47 compute-0 kernel: tap082e2fa7-67 (unregistering): left promiscuous mode
Feb 02 15:44:47 compute-0 NetworkManager[49171]: <info>  [1770047087.8410] device (tap082e2fa7-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.848 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:47 compute-0 ovn_controller[144995]: 2026-02-02T15:44:47Z|00201|binding|INFO|Releasing lport 082e2fa7-67a2-4169-9b44-15ae8108115b from this chassis (sb_readonly=0)
Feb 02 15:44:47 compute-0 ovn_controller[144995]: 2026-02-02T15:44:47Z|00202|binding|INFO|Setting lport 082e2fa7-67a2-4169-9b44-15ae8108115b down in Southbound
Feb 02 15:44:47 compute-0 ovn_controller[144995]: 2026-02-02T15:44:47Z|00203|binding|INFO|Removing iface tap082e2fa7-67 ovn-installed in OVS
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.858 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:47 compute-0 nova_compute[239545]: 2026-02-02 15:44:47.866 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:47.901 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:0c:2c 10.100.0.7'], port_security=['fa:16:3e:71:0c:2c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51307c94-353b-4d22-a215-27dba54ba38a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=082e2fa7-67a2-4169-9b44-15ae8108115b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:44:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:47.903 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 082e2fa7-67a2-4169-9b44-15ae8108115b in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:44:47 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb 02 15:44:47 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 12.884s CPU time.
Feb 02 15:44:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:47.905 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473fc4ca-a137-447b-9349-9f4677babee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:44:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:47.906 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7440c63a-97e7-4a82-ae79-f35a28821d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:47 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:47.906 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace which is not needed anymore
Feb 02 15:44:47 compute-0 systemd-machined[207609]: Machine qemu-21-instance-00000015 terminated.
Feb 02 15:44:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.021 239549 INFO nova.virt.libvirt.driver [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Instance destroyed successfully.
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.022 239549 DEBUG nova.objects.instance [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 51307c94-353b-4d22-a215-27dba54ba38a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:48 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [NOTICE]   (266269) : haproxy version is 2.8.14-c23fe91
Feb 02 15:44:48 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [NOTICE]   (266269) : path to executable is /usr/sbin/haproxy
Feb 02 15:44:48 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [WARNING]  (266269) : Exiting Master process...
Feb 02 15:44:48 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [ALERT]    (266269) : Current worker (266271) exited with code 143 (Terminated)
Feb 02 15:44:48 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[266263]: [WARNING]  (266269) : All workers exited. Exiting... (0)
Feb 02 15:44:48 compute-0 systemd[1]: libpod-50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242.scope: Deactivated successfully.
Feb 02 15:44:48 compute-0 podman[266966]: 2026-02-02 15:44:48.035232307 +0000 UTC m=+0.050720196 container died 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.045 239549 DEBUG nova.virt.libvirt.vif [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:44:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-825499828',display_name='tempest-TestVolumeBootPattern-server-825499828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-825499828',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:44:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-efx6o40n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:44:26Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=51307c94-353b-4d22-a215-27dba54ba38a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.046 239549 DEBUG nova.network.os_vif_util [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "082e2fa7-67a2-4169-9b44-15ae8108115b", "address": "fa:16:3e:71:0c:2c", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap082e2fa7-67", "ovs_interfaceid": "082e2fa7-67a2-4169-9b44-15ae8108115b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.047 239549 DEBUG nova.network.os_vif_util [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.048 239549 DEBUG os_vif [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.050 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.051 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap082e2fa7-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.053 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.055 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.059 239549 INFO os_vif [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:0c:2c,bridge_name='br-int',has_traffic_filtering=True,id=082e2fa7-67a2-4169-9b44-15ae8108115b,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap082e2fa7-67')
Feb 02 15:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242-userdata-shm.mount: Deactivated successfully.
Feb 02 15:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-808487ea379ac2891368c3d2f2c99f0ad892f623d84bcd236147e148f79912ca-merged.mount: Deactivated successfully.
Feb 02 15:44:48 compute-0 podman[266966]: 2026-02-02 15:44:48.093521635 +0000 UTC m=+0.109009504 container cleanup 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:44:48 compute-0 systemd[1]: libpod-conmon-50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242.scope: Deactivated successfully.
Feb 02 15:44:48 compute-0 podman[267019]: 2026-02-02 15:44:48.165338024 +0000 UTC m=+0.051929975 container remove 50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:44:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.169 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c2014d72-80c0-467c-b936-6bfc0da992a4]: (4, ('Mon Feb  2 03:44:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242)\n50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242\nMon Feb  2 03:44:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242)\n50122aa962859aa9734d970baf55e9a9db6755799f6a269c602990a4041de242\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.171 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[15ab1c33-88aa-431b-93f6-f89c85b84694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.173 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.175 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:48 compute-0 kernel: tap473fc4ca-a0: left promiscuous mode
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.180 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.183 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d96d0c5a-ebaa-48c4-b3f6-9c2cc3c0b227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.197 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc667b0-77f1-49c7-82d2-ef767860e15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.198 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c28be591-c69d-4bf7-9851-690fee73786b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.213 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4198944-c04a-4a12-87bc-97d6c5a72b46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455185, 'reachable_time': 24107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267035, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d473fc4ca\x2da137\x2d447b\x2d9349\x2d9f4677babee6.mount: Deactivated successfully.
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.217 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:44:48 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:48.217 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[88f76562-0692-4850-839d-632fada563e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.237 239549 INFO nova.virt.libvirt.driver [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Deleting instance files /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a_del
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.238 239549 INFO nova.virt.libvirt.driver [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Deletion of /var/lib/nova/instances/51307c94-353b-4d22-a215-27dba54ba38a_del complete
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.528 239549 INFO nova.compute.manager [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Took 0.74 seconds to destroy the instance on the hypervisor.
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.529 239549 DEBUG oslo.service.loopingcall [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.529 239549 DEBUG nova.compute.manager [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:44:48 compute-0 nova_compute[239545]: 2026-02-02 15:44:48.529 239549 DEBUG nova.network.neutron [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.243 239549 DEBUG nova.compute.manager [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-unplugged-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.244 239549 DEBUG oslo_concurrency.lockutils [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.244 239549 DEBUG oslo_concurrency.lockutils [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.244 239549 DEBUG oslo_concurrency.lockutils [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.244 239549 DEBUG nova.compute.manager [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] No waiting events found dispatching network-vif-unplugged-082e2fa7-67a2-4169-9b44-15ae8108115b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.244 239549 DEBUG nova.compute.manager [req-165180a0-6477-4671-aa42-9c68e1026f5f req-43087cde-48d1-455d-bbb4-46939d4b67b3 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-unplugged-082e2fa7-67a2-4169-9b44-15ae8108115b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:44:49 compute-0 ceph-mon[75334]: pgmap v1595: 305 pgs: 305 active+clean; 167 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.385 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.385 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.399 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.410 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.481 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.482 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.489 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.489 239549 INFO nova.compute.claims [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.646 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.828 239549 DEBUG nova.network.neutron [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.855 239549 INFO nova.compute.manager [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Took 1.33 seconds to deallocate network for instance.
Feb 02 15:44:49 compute-0 nova_compute[239545]: 2026-02-02 15:44:49.929 239549 DEBUG nova.compute.manager [req-995f3ba5-23a4-4893-bf82-9222b7ea0609 req-73bc828b-a7fc-4723-b12b-d885130dfdb7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-deleted-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 215 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 6.0 MiB/s wr, 98 op/s
Feb 02 15:44:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261957367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.208 239549 INFO nova.compute.manager [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Took 0.35 seconds to detach 1 volumes for instance.
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.227 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.233 239549 DEBUG nova.compute.provider_tree [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.270 239549 DEBUG nova.scheduler.client.report [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2261957367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.274 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.299 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.299 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.302 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.354 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.355 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.371 239549 DEBUG oslo_concurrency.processutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.395 239549 INFO nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.419 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.548 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.550 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.551 239549 INFO nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Creating image(s)
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.581 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.605 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.626 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.629 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.652 239549 DEBUG nova.policy [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91001e0c903c4810bbeb98636b2e2380', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4dcd12fb00104dd9bbcc100f7828c435', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.685 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.686 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.687 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.687 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.710 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.713 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0a8d1e5a-af31-43cc-80a2-17c586996828_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2167972666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.931 239549 DEBUG oslo_concurrency.processutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.936 239549 DEBUG nova.compute.provider_tree [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.945 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 0a8d1e5a-af31-43cc-80a2-17c586996828_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:50 compute-0 nova_compute[239545]: 2026-02-02 15:44:50.970 239549 DEBUG nova.scheduler.client.report [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.003 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.010 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] resizing rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.053 239549 INFO nova.scheduler.client.report [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 51307c94-353b-4d22-a215-27dba54ba38a
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.088 239549 DEBUG nova.objects.instance [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.120 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.120 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Ensure instance console log exists: /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.120 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.120 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.121 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.150 239549 DEBUG oslo_concurrency.lockutils [None req-08487d24-c852-4372-83be-a76c0b1941e5 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:51 compute-0 ceph-mon[75334]: pgmap v1596: 305 pgs: 305 active+clean; 215 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 6.0 MiB/s wr, 98 op/s
Feb 02 15:44:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2167972666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.307596) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091307636, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2642, "num_deletes": 523, "total_data_size": 3442564, "memory_usage": 3508048, "flush_reason": "Manual Compaction"}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091327215, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3374208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29453, "largest_seqno": 32094, "table_properties": {"data_size": 3362265, "index_size": 7413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 28055, "raw_average_key_size": 20, "raw_value_size": 3336316, "raw_average_value_size": 2437, "num_data_blocks": 321, "num_entries": 1369, "num_filter_entries": 1369, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770046931, "oldest_key_time": 1770046931, "file_creation_time": 1770047091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19662 microseconds, and 4642 cpu microseconds.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.327254) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3374208 bytes OK
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.327271) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.332999) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.333014) EVENT_LOG_v1 {"time_micros": 1770047091333009, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.333032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3430243, prev total WAL file size 3430243, number of live WAL files 2.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.333539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3295KB)], [62(9208KB)]
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091333571, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12803740, "oldest_snapshot_seqno": -1}
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.340 239549 DEBUG nova.compute.manager [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.340 239549 DEBUG oslo_concurrency.lockutils [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "51307c94-353b-4d22-a215-27dba54ba38a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.340 239549 DEBUG oslo_concurrency.lockutils [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.341 239549 DEBUG oslo_concurrency.lockutils [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "51307c94-353b-4d22-a215-27dba54ba38a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.341 239549 DEBUG nova.compute.manager [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] No waiting events found dispatching network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.342 239549 WARNING nova.compute.manager [req-48dae765-2103-43be-80f4-ddc339c5832c req-df03ea6a-2974-4260-8ad2-d8edeb05954b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Received unexpected event network-vif-plugged-082e2fa7-67a2-4169-9b44-15ae8108115b for instance with vm_state deleted and task_state None.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6322 keys, 10865293 bytes, temperature: kUnknown
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091383494, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10865293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10815783, "index_size": 32632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 159294, "raw_average_key_size": 25, "raw_value_size": 10694893, "raw_average_value_size": 1691, "num_data_blocks": 1308, "num_entries": 6322, "num_filter_entries": 6322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.383823) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10865293 bytes
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.390218) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 255.9 rd, 217.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7371, records dropped: 1049 output_compression: NoCompression
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.390257) EVENT_LOG_v1 {"time_micros": 1770047091390240, "job": 34, "event": "compaction_finished", "compaction_time_micros": 50033, "compaction_time_cpu_micros": 19394, "output_level": 6, "num_output_files": 1, "total_output_size": 10865293, "num_input_records": 7371, "num_output_records": 6322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091390839, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047091392051, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.333453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.392137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.392142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.392144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.392147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:44:51.392149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:44:51 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:51.729 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.729 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:51 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:51.730 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.907 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Successfully created port: b40b5abb-11a7-4bce-96a9-904feea605f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:44:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 11 MiB/s wr, 113 op/s
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.989 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:51 compute-0 nova_compute[239545]: 2026-02-02 15:44:51.989 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.007 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.081 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.082 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.093 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.093 239549 INFO nova.compute.claims [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.226 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348100840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.816 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.821 239549 DEBUG nova.compute.provider_tree [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.840 239549 DEBUG nova.scheduler.client.report [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.866 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.867 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.912 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.913 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.940 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Successfully updated port: b40b5abb-11a7-4bce-96a9-904feea605f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.945 239549 INFO nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.974 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.974 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.975 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:44:52 compute-0 nova_compute[239545]: 2026-02-02 15:44:52.977 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.026 239549 DEBUG nova.compute.manager [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-changed-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.026 239549 DEBUG nova.compute.manager [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Refreshing instance network info cache due to event network-changed-b40b5abb-11a7-4bce-96a9-904feea605f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.026 239549 DEBUG oslo_concurrency.lockutils [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.044 239549 INFO nova.virt.block_device [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Booting with volume 4c639a87-991a-40d6-b1a2-c7bd5580d6b1 at /dev/vda
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.055 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.190 239549 DEBUG os_brick.utils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.192 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.204 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.204 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab81653-80a5-48fc-bc45-de53604a8b4a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.206 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.213 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.213 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[4ffdeaec-dca7-4bc1-8cf8-f74503163a96]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.215 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.223 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.223 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[e48fd544-ca30-4b07-b7ea-8cdb392386fd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.225 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[083b030f-71b2-49fc-901f-d54a94ccf840]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.225 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.250 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.252 239549 DEBUG os_brick.initiator.connectors.lightos [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.253 239549 DEBUG os_brick.initiator.connectors.lightos [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.253 239549 DEBUG os_brick.initiator.connectors.lightos [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.253 239549 DEBUG os_brick.utils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.254 239549 DEBUG nova.virt.block_device [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updating existing volume attachment record: 5a2f8c13-e3ee-4989-bb52-84d6842237e6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:44:53 compute-0 ceph-mon[75334]: pgmap v1597: 305 pgs: 305 active+clean; 281 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 11 MiB/s wr, 113 op/s
Feb 02 15:44:53 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3348100840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.377 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:44:53 compute-0 nova_compute[239545]: 2026-02-02 15:44:53.420 239549 DEBUG nova.policy [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df03e4d41ae644fca567cfe648b7bad6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:44:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 9.7 MiB/s wr, 65 op/s
Feb 02 15:44:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364835668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1364835668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.401 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.0476595096199384e-05 of space, bias 1.0, pg target 0.018142978528859814 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029480673414954313 of space, bias 1.0, pg target 0.8844202024486294 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5693688874226993e-06 of space, bias 1.0, pg target 0.0007708106662268098 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676810385620888 of space, bias 1.0, pg target 0.20030431156862663 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4621050025426744e-06 of space, bias 4.0, pg target 0.0017545260030512092 quantized to 16 (current 16)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:44:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.709 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.710 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.711 239549 INFO nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Creating image(s)
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.711 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.711 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Ensure instance console log exists: /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.712 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.712 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.713 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:54 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:54.732 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.774 239549 DEBUG nova.network.neutron [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.815 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.815 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Instance network_info: |[{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.816 239549 DEBUG oslo_concurrency.lockutils [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.816 239549 DEBUG nova.network.neutron [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Refreshing network info cache for port b40b5abb-11a7-4bce-96a9-904feea605f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.818 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Start _get_guest_xml network_info=[{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.822 239549 WARNING nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.827 239549 DEBUG nova.virt.libvirt.host [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.828 239549 DEBUG nova.virt.libvirt.host [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.830 239549 DEBUG nova.virt.libvirt.host [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.831 239549 DEBUG nova.virt.libvirt.host [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.831 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.831 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.832 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.832 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.832 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.832 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.833 239549 DEBUG nova.virt.hardware [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:44:54 compute-0 nova_compute[239545]: 2026-02-02 15:44:54.836 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:55 compute-0 nova_compute[239545]: 2026-02-02 15:44:55.130 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Successfully created port: b06b3b06-65e4-495f-8828-87024e852a05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:44:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938094472' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:55 compute-0 ceph-mon[75334]: pgmap v1598: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 9.7 MiB/s wr, 65 op/s
Feb 02 15:44:55 compute-0 nova_compute[239545]: 2026-02-02 15:44:55.409 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:55 compute-0 nova_compute[239545]: 2026-02-02 15:44:55.431 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:55 compute-0 nova_compute[239545]: 2026-02-02 15:44:55.435 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 11 MiB/s wr, 88 op/s
Feb 02 15:44:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261820539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.028 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.030 239549 DEBUG nova.virt.libvirt.vif [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-569887417',display_name='tempest-SnapshotDataIntegrityTests-server-569887417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-569887417',id=22,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDjFL6frGYjZPZCAUAsF1kAi2gOs4UqX81GaslFuyFLyY5rcP/AssRZOt9xbxtSCQ4ETXtR5POrUSSA1jnMxdJ/13sE4Jmx1NpbWyjIm1JVJWcS6wHWb75Gr3WAoTE0CQ==',key_name='tempest-keypair-1823352159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dcd12fb00104dd9bbcc100f7828c435',ramdisk_id='',reservation_id='r-1qpnu0l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-235440494',owner_user_name='tempest-SnapshotDataIntegrityTests-235440494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91001e0c903c4810bbeb98636b2e2380',uuid=0a8d1e5a-af31-43cc-80a2-17c586996828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.031 239549 DEBUG nova.network.os_vif_util [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converting VIF {"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.032 239549 DEBUG nova.network.os_vif_util [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.034 239549 DEBUG nova.objects.instance [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.043 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.044 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.077 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <uuid>0a8d1e5a-af31-43cc-80a2-17c586996828</uuid>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <name>instance-00000016</name>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-569887417</nova:name>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:44:54</nova:creationTime>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:user uuid="91001e0c903c4810bbeb98636b2e2380">tempest-SnapshotDataIntegrityTests-235440494-project-member</nova:user>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:project uuid="4dcd12fb00104dd9bbcc100f7828c435">tempest-SnapshotDataIntegrityTests-235440494</nova:project>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <nova:port uuid="b40b5abb-11a7-4bce-96a9-904feea605f6">
Feb 02 15:44:56 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <system>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="serial">0a8d1e5a-af31-43cc-80a2-17c586996828</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="uuid">0a8d1e5a-af31-43cc-80a2-17c586996828</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </system>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <os>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </os>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <features>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </features>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0a8d1e5a-af31-43cc-80a2-17c586996828_disk">
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config">
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:56 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:a3:7b:e6"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <target dev="tapb40b5abb-11"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/console.log" append="off"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <video>
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </video>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:44:56 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:44:56 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:44:56 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:44:56 compute-0 nova_compute[239545]: </domain>
Feb 02 15:44:56 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.078 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Preparing to wait for external event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.078 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.079 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.079 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.080 239549 DEBUG nova.virt.libvirt.vif [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-569887417',display_name='tempest-SnapshotDataIntegrityTests-server-569887417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-569887417',id=22,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDjFL6frGYjZPZCAUAsF1kAi2gOs4UqX81GaslFuyFLyY5rcP/AssRZOt9xbxtSCQ4ETXtR5POrUSSA1jnMxdJ/13sE4Jmx1NpbWyjIm1JVJWcS6wHWb75Gr3WAoTE0CQ==',key_name='tempest-keypair-1823352159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dcd12fb00104dd9bbcc100f7828c435',ramdisk_id='',reservation_id='r-1qpnu0l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-235440494',owner_user_name='tempest-SnapshotDataIntegrityTests-235440494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91001e0c903c4810bbeb98636b2e2380',uuid=0a8d1e5a-af31-43cc-80a2-17c586996828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.080 239549 DEBUG nova.network.os_vif_util [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converting VIF {"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.081 239549 DEBUG nova.network.os_vif_util [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.081 239549 DEBUG os_vif [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.082 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.083 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.083 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.084 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.088 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.088 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb40b5abb-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.089 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb40b5abb-11, col_values=(('external_ids', {'iface-id': 'b40b5abb-11a7-4bce-96a9-904feea605f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:7b:e6', 'vm-uuid': '0a8d1e5a-af31-43cc-80a2-17c586996828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.091 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 NetworkManager[49171]: <info>  [1770047096.0926] manager: (tapb40b5abb-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.096 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.098 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.099 239549 INFO os_vif [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11')
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.185 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.185 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.186 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No VIF found with MAC fa:16:3e:a3:7b:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.186 239549 INFO nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Using config drive
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.208 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.214 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.215 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.221 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.221 239549 INFO nova.compute.claims [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.407 239549 DEBUG nova.network.neutron [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated VIF entry in instance network info cache for port b40b5abb-11a7-4bce-96a9-904feea605f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.408 239549 DEBUG nova.network.neutron [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/938094472' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/261820539' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.453 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Successfully updated port: b06b3b06-65e4-495f-8828-87024e852a05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.642 239549 INFO nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Creating config drive at /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.649 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprxa4mogs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.675 239549 DEBUG nova.compute.manager [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-changed-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.676 239549 DEBUG nova.compute.manager [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Refreshing instance network info cache due to event network-changed-b06b3b06-65e4-495f-8828-87024e852a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.676 239549 DEBUG oslo_concurrency.lockutils [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.676 239549 DEBUG oslo_concurrency.lockutils [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.677 239549 DEBUG nova.network.neutron [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Refreshing network info cache for port b06b3b06-65e4-495f-8828-87024e852a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.681 239549 DEBUG oslo_concurrency.lockutils [req-5c36760a-b0e1-4bee-afa9-6eb600b4adc7 req-32c6be3d-6d1f-46c9-9abd-7aa9ffaec4f2 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.682 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.748 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.775 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprxa4mogs" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.797 239549 DEBUG nova.storage.rbd_utils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] rbd image 0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.800 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config 0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.855 239549 DEBUG nova.network.neutron [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.937 239549 DEBUG oslo_concurrency.processutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config 0a8d1e5a-af31-43cc-80a2-17c586996828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.938 239549 INFO nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Deleting local config drive /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828/disk.config because it was imported into RBD.
Feb 02 15:44:56 compute-0 kernel: tapb40b5abb-11: entered promiscuous mode
Feb 02 15:44:56 compute-0 NetworkManager[49171]: <info>  [1770047096.9814] manager: (tapb40b5abb-11): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.982 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 ovn_controller[144995]: 2026-02-02T15:44:56Z|00204|binding|INFO|Claiming lport b40b5abb-11a7-4bce-96a9-904feea605f6 for this chassis.
Feb 02 15:44:56 compute-0 ovn_controller[144995]: 2026-02-02T15:44:56Z|00205|binding|INFO|b40b5abb-11a7-4bce-96a9-904feea605f6: Claiming fa:16:3e:a3:7b:e6 10.100.0.6
Feb 02 15:44:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:56.993 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:7b:e6 10.100.0.6'], port_security=['fa:16:3e:a3:7b:e6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a8d1e5a-af31-43cc-80a2-17c586996828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dcd12fb00104dd9bbcc100f7828c435', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64abc105-a857-4a13-b475-b019801cc32c e8b16762-2aff-4721-b0e3-10bc40176f4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8bd12e5-65b1-4fe7-9b52-fe844064c5a9, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b40b5abb-11a7-4bce-96a9-904feea605f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.994 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:56 compute-0 ovn_controller[144995]: 2026-02-02T15:44:56Z|00206|binding|INFO|Setting lport b40b5abb-11a7-4bce-96a9-904feea605f6 ovn-installed in OVS
Feb 02 15:44:56 compute-0 ovn_controller[144995]: 2026-02-02T15:44:56Z|00207|binding|INFO|Setting lport b40b5abb-11a7-4bce-96a9-904feea605f6 up in Southbound
Feb 02 15:44:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:56.996 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b40b5abb-11a7-4bce-96a9-904feea605f6 in datapath 93cb165b-b97d-434d-8af7-ddc2fabeffee bound to our chassis
Feb 02 15:44:56 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:56.998 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93cb165b-b97d-434d-8af7-ddc2fabeffee
Feb 02 15:44:56 compute-0 nova_compute[239545]: 2026-02-02 15:44:56.998 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.007 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1abb26be-db02-4983-ba04-0ab6358e7b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.008 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93cb165b-b1 in ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:44:57 compute-0 systemd-machined[207609]: New machine qemu-22-instance-00000016.
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.013 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93cb165b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.013 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2d460d54-276a-4644-a428-e6be3dd610b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.015 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[21025b0a-c020-4744-9a31-3aa8a2891b30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.025 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[538163e7-9615-4005-8757-07512a9b7d82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Feb 02 15:44:57 compute-0 systemd-udevd[267435]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.038 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb3bba4-6b30-4dbc-bb9b-fd3382862882]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 NetworkManager[49171]: <info>  [1770047097.0485] device (tapb40b5abb-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:44:57 compute-0 NetworkManager[49171]: <info>  [1770047097.0493] device (tapb40b5abb-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.065 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcf6fa1-56de-4fc5-8ed9-ce7d4e0f6da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.070 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d694d34e-b1f0-4628-bdca-c6b59ea93a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 systemd-udevd[267443]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:44:57 compute-0 NetworkManager[49171]: <info>  [1770047097.0713] manager: (tap93cb165b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.094 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[10e78552-4a03-4f95-9afb-ede99ccf063a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.097 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb6bee0-ef0e-41ad-aa75-7442bc50e394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 NetworkManager[49171]: <info>  [1770047097.1146] device (tap93cb165b-b0): carrier: link connected
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.118 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[d12ad3b7-df04-4ab8-bcf4-ef12457df1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.137 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[039a9882-25db-4dff-86ef-1b8b9f981ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93cb165b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:af:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458303, 'reachable_time': 27846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267465, 'error': None, 'target': 'ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.149 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c53d09-5208-4981-ae64-180d94b9ec9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:af44'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458303, 'tstamp': 458303}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267466, 'error': None, 'target': 'ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.163 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8ce06b-69ee-4e8b-9197-5db3a5ade22a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93cb165b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:af:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458303, 'reachable_time': 27846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267467, 'error': None, 'target': 'ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.189 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7cff3b70-90b7-40ca-87ca-3a88dd7bb7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.238 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7b26c5c6-1274-4512-8c81-9abc0f88f757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.240 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93cb165b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.241 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.241 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93cb165b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.243 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:57 compute-0 NetworkManager[49171]: <info>  [1770047097.2444] manager: (tap93cb165b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb 02 15:44:57 compute-0 kernel: tap93cb165b-b0: entered promiscuous mode
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.245 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.248 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93cb165b-b0, col_values=(('external_ids', {'iface-id': 'a43331b2-e1ad-4aa9-beac-e80c59fa7f31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.249 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:57 compute-0 ovn_controller[144995]: 2026-02-02T15:44:57Z|00208|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.252 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93cb165b-b97d-434d-8af7-ddc2fabeffee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93cb165b-b97d-434d-8af7-ddc2fabeffee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.252 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[41a5550e-0964-41f3-ae45-e01f177721f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.253 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-93cb165b-b97d-434d-8af7-ddc2fabeffee
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/93cb165b-b97d-434d-8af7-ddc2fabeffee.pid.haproxy
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 93cb165b-b97d-434d-8af7-ddc2fabeffee
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:44:57 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:57.254 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'env', 'PROCESS_TAG=haproxy-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93cb165b-b97d-434d-8af7-ddc2fabeffee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.255 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.304 239549 DEBUG nova.network.neutron [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.330 239549 DEBUG oslo_concurrency.lockutils [req-75cf75ce-5dbe-4b6e-aed7-bf47c16140ea req-ce5a8e83-d1ed-4a00-942b-556edd4d8a8c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.331 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquired lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.331 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:44:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:44:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476827223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.358 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.365 239549 DEBUG nova.compute.provider_tree [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.385 239549 DEBUG nova.scheduler.client.report [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.417 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.418 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.477 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.478 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.503 239549 INFO nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.522 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.532 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:44:57 compute-0 ceph-mon[75334]: pgmap v1599: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 11 MiB/s wr, 88 op/s
Feb 02 15:44:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2476827223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.566 239549 INFO nova.virt.block_device [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Booting with volume 9698e5da-2df0-4288-87d3-c3ebb6c2ab14 at /dev/vda
Feb 02 15:44:57 compute-0 podman[267538]: 2026-02-02 15:44:57.589283508 +0000 UTC m=+0.044963435 container create 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 02 15:44:57 compute-0 systemd[1]: Started libpod-conmon-5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac.scope.
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.619 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047097.6181016, 0a8d1e5a-af31-43cc-80a2-17c586996828 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.619 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] VM Started (Lifecycle Event)
Feb 02 15:44:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04afa74679d4774c0a9e465d1c766f43997907cb707a89950696228a16ca8f4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.637 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:57 compute-0 podman[267538]: 2026-02-02 15:44:57.638554108 +0000 UTC m=+0.094234065 container init 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.640 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047097.6188366, 0a8d1e5a-af31-43cc-80a2-17c586996828 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.640 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] VM Paused (Lifecycle Event)
Feb 02 15:44:57 compute-0 podman[267538]: 2026-02-02 15:44:57.642929525 +0000 UTC m=+0.098609452 container start 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 15:44:57 compute-0 podman[267538]: 2026-02-02 15:44:57.563171323 +0000 UTC m=+0.018851280 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.656 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:57 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [NOTICE]   (267563) : New worker (267565) forked
Feb 02 15:44:57 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [NOTICE]   (267563) : Loading success.
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.661 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.682 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.692 239549 DEBUG nova.policy [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.704 239549 DEBUG os_brick.utils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.705 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.715 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.715 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[8059ca79-e46b-4ef0-86c1-4fc1d8688f87]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.717 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.723 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.723 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[df885b56-8421-44f7-9f8e-b116c5100a19]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.725 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.730 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.731 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[507c249b-6953-459c-8e12-810214050edb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.733 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[3cabc12d-993c-419e-b2a8-e827ebe3f69c]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.733 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.753 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.755 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.755 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.755 239549 DEBUG os_brick.initiator.connectors.lightos [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.755 239549 DEBUG os_brick.utils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (50ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:44:57 compute-0 nova_compute[239545]: 2026-02-02 15:44:57.756 239549 DEBUG nova.virt.block_device [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating existing volume attachment record: da341cbf-0704-4260-b27d-13a0c2066860 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:44:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 11 MiB/s wr, 84 op/s
Feb 02 15:44:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.348 239549 DEBUG nova.network.neutron [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updating instance_info_cache with network_info: [{"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.385 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Releasing lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.386 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance network_info: |[{"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.388 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Start _get_guest_xml network_info=[{"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '5a2f8c13-e3ee-4989-bb52-84d6842237e6', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'attached_at': '', 'detached_at': '', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'serial': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.392 239549 WARNING nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.398 239549 DEBUG nova.virt.libvirt.host [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.398 239549 DEBUG nova.virt.libvirt.host [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.401 239549 DEBUG nova.virt.libvirt.host [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.401 239549 DEBUG nova.virt.libvirt.host [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.401 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.402 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.402 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.402 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.402 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.403 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.403 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.403 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.403 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.403 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.404 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.404 239549 DEBUG nova.virt.hardware [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.425 239549 DEBUG nova.storage.rbd_utils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.429 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:44:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1200502210' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.534 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Successfully created port: 4cb7a453-9db5-4fbc-a7ba-59600d76589c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:44:58 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1200502210' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.893 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.894 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.895 239549 INFO nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Creating image(s)
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.895 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.895 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Ensure instance console log exists: /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.895 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.896 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.896 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.943 239549 DEBUG nova.compute.manager [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.943 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.944 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.944 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.944 239549 DEBUG nova.compute.manager [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Processing event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.945 239549 DEBUG nova.compute.manager [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.945 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.945 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.945 239549 DEBUG oslo_concurrency.lockutils [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.946 239549 DEBUG nova.compute.manager [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] No waiting events found dispatching network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.946 239549 WARNING nova.compute.manager [req-a60ae083-038d-4b92-993e-178f64baf2a2 req-552f57df-c65e-4085-82bf-003abd9a53a6 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received unexpected event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 for instance with vm_state building and task_state spawning.
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.947 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.951 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.952 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047098.9513612, 0a8d1e5a-af31-43cc-80a2-17c586996828 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.952 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] VM Resumed (Lifecycle Event)
Feb 02 15:44:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:44:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117358143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.962 239549 INFO nova.virt.libvirt.driver [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Instance spawned successfully.
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.964 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.973 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.981 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.987 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.990 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.990 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.991 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.991 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.992 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:58 compute-0 nova_compute[239545]: 2026-02-02 15:44:58.992 239549 DEBUG nova.virt.libvirt.driver [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.022 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.047 239549 INFO nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Took 8.50 seconds to spawn the instance on the hypervisor.
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.048 239549 DEBUG nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.096 239549 DEBUG os_brick.encryptors [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Using volume encryption metadata '{'encryption_key_id': '17e744f7-0c89-49de-8878-57332d3a6df8', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'attached_at': '', 'detached_at': '', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.099 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.113 239549 INFO nova.compute.manager [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Took 9.65 seconds to build instance.
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.124 239549 DEBUG barbicanclient.v1.secrets [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/17e744f7-0c89-49de-8878-57332d3a6df8 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.124 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.128 239549 DEBUG oslo_concurrency.lockutils [None req-814a38e5-f920-4a1b-84b1-4dfa8a9f77d7 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.144 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.144 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.177 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.177 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.205 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.206 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.226 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.227 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.243 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Successfully updated port: 4cb7a453-9db5-4fbc-a7ba-59600d76589c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:44:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:59.255 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:59.255 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:44:59.256 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.256 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.257 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.257 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.338 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.339 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.366 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.366 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.400 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.401 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.403 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.424 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.425 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.454 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.455 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.479 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.479 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.512 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.513 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.538 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.539 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.555 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.556 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 ceph-mon[75334]: pgmap v1600: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 11 MiB/s wr, 84 op/s
Feb 02 15:44:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3117358143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.600 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.601 239549 INFO barbicanclient.base [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/17e744f7-0c89-49de-8878-57332d3a6df8
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.633 239549 DEBUG barbicanclient.client [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.634 239549 DEBUG nova.virt.libvirt.host [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <volume>4c639a87-991a-40d6-b1a2-c7bd5580d6b1</volume>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:44:59 compute-0 nova_compute[239545]: </secret>
Feb 02 15:44:59 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.671 239549 DEBUG nova.virt.libvirt.vif [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-640335234',display_name='tempest-TransferEncryptedVolumeTest-server-640335234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-640335234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-uxc51fyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:53Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=58bc96ea-b6cb-4080-b353-861ed4e160f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.672 239549 DEBUG nova.network.os_vif_util [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.672 239549 DEBUG nova.network.os_vif_util [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.674 239549 DEBUG nova.objects.instance [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58bc96ea-b6cb-4080-b353-861ed4e160f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.702 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <uuid>58bc96ea-b6cb-4080-b353-861ed4e160f9</uuid>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <name>instance-00000017</name>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-640335234</nova:name>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:44:58</nova:creationTime>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:user uuid="df03e4d41ae644fca567cfe648b7bad6">tempest-TransferEncryptedVolumeTest-1895614673-project-member</nova:user>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:project uuid="6d6011a66bdb41cea09b6018ceeec7d4">tempest-TransferEncryptedVolumeTest-1895614673</nova:project>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <nova:port uuid="b06b3b06-65e4-495f-8828-87024e852a05">
Feb 02 15:44:59 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <system>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="serial">58bc96ea-b6cb-4080-b353-861ed4e160f9</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="uuid">58bc96ea-b6cb-4080-b353-861ed4e160f9</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </system>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <os>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </os>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <features>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </features>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </source>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <serial>4c639a87-991a-40d6-b1a2-c7bd5580d6b1</serial>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:44:59 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="0108f9f8-69fb-4cd0-b095-e683fd4f323f"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:70:b7:65"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <target dev="tapb06b3b06-65"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/console.log" append="off"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <video>
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </video>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:44:59 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:44:59 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:44:59 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:44:59 compute-0 nova_compute[239545]: </domain>
Feb 02 15:44:59 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.703 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Preparing to wait for external event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.703 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.704 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.704 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.705 239549 DEBUG nova.virt.libvirt.vif [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-640335234',display_name='tempest-TransferEncryptedVolumeTest-server-640335234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-640335234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-uxc51fyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:53Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=58bc96ea-b6cb-4080-b353-861ed4e160f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.705 239549 DEBUG nova.network.os_vif_util [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.706 239549 DEBUG nova.network.os_vif_util [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.707 239549 DEBUG os_vif [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.707 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.708 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.708 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.712 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.713 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb06b3b06-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.713 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb06b3b06-65, col_values=(('external_ids', {'iface-id': 'b06b3b06-65e4-495f-8828-87024e852a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:b7:65', 'vm-uuid': '58bc96ea-b6cb-4080-b353-861ed4e160f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.715 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:59 compute-0 NetworkManager[49171]: <info>  [1770047099.7159] manager: (tapb06b3b06-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.716 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.722 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.724 239549 INFO os_vif [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65')
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.800 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.801 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.801 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No VIF found with MAC fa:16:3e:70:b7:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.802 239549 INFO nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Using config drive
Feb 02 15:44:59 compute-0 nova_compute[239545]: 2026-02-02 15:44:59.822 239549 DEBUG nova.storage.rbd_utils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:44:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 11 MiB/s wr, 86 op/s
Feb 02 15:45:00 compute-0 nova_compute[239545]: 2026-02-02 15:45:00.404 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.358 239549 INFO nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Creating config drive at /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.361 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp32_eh4_i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.489 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp32_eh4_i" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.512 239549 DEBUG nova.storage.rbd_utils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image 58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.517 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config 58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:01 compute-0 ceph-mon[75334]: pgmap v1601: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 11 MiB/s wr, 86 op/s
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.644 239549 DEBUG oslo_concurrency.processutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config 58bc96ea-b6cb-4080-b353-861ed4e160f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.645 239549 INFO nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Deleting local config drive /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9/disk.config because it was imported into RBD.
Feb 02 15:45:01 compute-0 NetworkManager[49171]: <info>  [1770047101.6716] manager: (tapb06b3b06-65): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Feb 02 15:45:01 compute-0 kernel: tapb06b3b06-65: entered promiscuous mode
Feb 02 15:45:01 compute-0 ovn_controller[144995]: 2026-02-02T15:45:01Z|00209|binding|INFO|Claiming lport b06b3b06-65e4-495f-8828-87024e852a05 for this chassis.
Feb 02 15:45:01 compute-0 ovn_controller[144995]: 2026-02-02T15:45:01Z|00210|binding|INFO|b06b3b06-65e4-495f-8828-87024e852a05: Claiming fa:16:3e:70:b7:65 10.100.0.4
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.673 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.677 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:01 compute-0 ovn_controller[144995]: 2026-02-02T15:45:01Z|00211|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 ovn-installed in OVS
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.682 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:01 compute-0 nova_compute[239545]: 2026-02-02 15:45:01.683 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:01 compute-0 systemd-udevd[267695]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:01 compute-0 systemd-machined[207609]: New machine qemu-23-instance-00000017.
Feb 02 15:45:01 compute-0 NetworkManager[49171]: <info>  [1770047101.7077] device (tapb06b3b06-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:45:01 compute-0 NetworkManager[49171]: <info>  [1770047101.7083] device (tapb06b3b06-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:45:01 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Feb 02 15:45:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.3 MiB/s wr, 117 op/s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.072 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b7:65 10.100.0.4'], port_security=['fa:16:3e:70:b7:65 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b06b3b06-65e4-495f-8828-87024e852a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:02 compute-0 ovn_controller[144995]: 2026-02-02T15:45:02Z|00212|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 up in Southbound
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.073 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b06b3b06-65e4-495f-8828-87024e852a05 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c bound to our chassis
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.079 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.089 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[85561214-8ca7-4dc4-9415-0e1f3e2bf868]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.090 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6f67b7a-31 in ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.092 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6f67b7a-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.092 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0fef29ca-08f5-4ea9-bb2d-04ac91e8b796]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.093 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0b1020-064d-41a0-99a4-a45dd962462f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.103 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[75e14237-126c-4b35-80e7-a76096227c8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.114 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b56477ed-7657-4e54-abb8-a0dd91a541b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.140 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e7baa9f2-d0fe-462d-b7a7-69f09d3f5546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.145 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[be6bd313-00c3-479d-81d6-2aa66b841b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 NetworkManager[49171]: <info>  [1770047102.1462] manager: (tapb6f67b7a-30): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Feb 02 15:45:02 compute-0 systemd-udevd[267697]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.180 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[4023171c-05ef-4400-a2ed-3b7f866f076d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.183 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[a9abe0cd-09d4-4f15-a9b3-f7200efb972a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 NetworkManager[49171]: <info>  [1770047102.2021] device (tapb6f67b7a-30): carrier: link connected
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.208 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[f104960c-710a-4b76-9589-4eb641d07325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.225 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[aba77b01-d092-4fce-8063-7f7a0571e125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458811, 'reachable_time': 24244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267764, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.240 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa07d94-c02a-4e38-a0fb-7d92cf090a6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:b29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458811, 'tstamp': 458811}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267765, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.256 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a3d353-a46a-4653-8d8d-a4aaec3f3cf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458811, 'reachable_time': 24244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267766, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.283 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b90c0b01-637e-487f-aebf-9503e98aa4cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.333 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d4aeeedd-1a7d-49d2-9968-87554a063974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.334 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.335 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.335 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6f67b7a-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.337 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:02 compute-0 kernel: tapb6f67b7a-30: entered promiscuous mode
Feb 02 15:45:02 compute-0 NetworkManager[49171]: <info>  [1770047102.3376] manager: (tapb6f67b7a-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.339 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6f67b7a-30, col_values=(('external_ids', {'iface-id': '4216aeff-7d93-404b-9880-8737d42e9d19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.340 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:02 compute-0 ovn_controller[144995]: 2026-02-02T15:45:02Z|00213|binding|INFO|Releasing lport 4216aeff-7d93-404b-9880-8737d42e9d19 from this chassis (sb_readonly=0)
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.341 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.341 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.342 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7db431-90ef-40a2-a799-cd5ae92cb44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.343 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:45:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:02.343 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'env', 'PROCESS_TAG=haproxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.346 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.517 239549 DEBUG nova.compute.manager [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.517 239549 DEBUG nova.compute.manager [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing instance network info cache due to event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:02 compute-0 nova_compute[239545]: 2026-02-02 15:45:02.517 239549 DEBUG oslo_concurrency.lockutils [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:02 compute-0 podman[267798]: 2026-02-02 15:45:02.676745206 +0000 UTC m=+0.045675773 container create 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:45:02 compute-0 systemd[1]: Started libpod-conmon-0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652.scope.
Feb 02 15:45:02 compute-0 podman[267798]: 2026-02-02 15:45:02.654587727 +0000 UTC m=+0.023518314 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:45:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317df7ae878caf744ba209764c5383b33238a89c916a8c86593d6c39cfdf98b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:02 compute-0 podman[267798]: 2026-02-02 15:45:02.774202549 +0000 UTC m=+0.143133136 container init 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:45:02 compute-0 podman[267798]: 2026-02-02 15:45:02.779130018 +0000 UTC m=+0.148060585 container start 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:45:02 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [NOTICE]   (267817) : New worker (267819) forked
Feb 02 15:45:02 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [NOTICE]   (267817) : Loading success.
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.020 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047088.0181699, 51307c94-353b-4d22-a215-27dba54ba38a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.020 239549 INFO nova.compute.manager [-] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] VM Stopped (Lifecycle Event)
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.051 239549 DEBUG nova.compute.manager [None req-86b085a7-4c3b-4339-8db3-f714380a19b6 - - - - - -] [instance: 51307c94-353b-4d22-a215-27dba54ba38a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:03 compute-0 ceph-mon[75334]: pgmap v1602: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.3 MiB/s wr, 117 op/s
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.894 239549 DEBUG nova.network.neutron [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating instance_info_cache with network_info: [{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.920 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.920 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Instance network_info: |[{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.921 239549 DEBUG oslo_concurrency.lockutils [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.921 239549 DEBUG nova.network.neutron [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.924 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Start _get_guest_xml network_info=[{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': 'da341cbf-0704-4260-b27d-13a0c2066860', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '589acca5-dd9e-4695-b32a-0235932283d1', 'attached_at': '', 'detached_at': '', 'volume_id': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14', 'serial': '9698e5da-2df0-4288-87d3-c3ebb6c2ab14'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.934 239549 WARNING nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.942 239549 DEBUG nova.virt.libvirt.host [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.943 239549 DEBUG nova.virt.libvirt.host [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.947 239549 DEBUG nova.virt.libvirt.host [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.948 239549 DEBUG nova.virt.libvirt.host [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.949 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.949 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.949 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.950 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.950 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.950 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.950 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.950 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.951 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.951 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.951 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.951 239549 DEBUG nova.virt.hardware [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:45:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.978 239549 DEBUG nova.storage.rbd_utils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 589acca5-dd9e-4695-b32a-0235932283d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:03 compute-0 nova_compute[239545]: 2026-02-02 15:45:03.983 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.404 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.439 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047104.438906, 58bc96ea-b6cb-4080-b353-861ed4e160f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.441 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] VM Started (Lifecycle Event)
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.462 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.466 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047104.4394374, 58bc96ea-b6cb-4080-b353-861ed4e160f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.466 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] VM Paused (Lifecycle Event)
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.483 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.487 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.505 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:04 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785412453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.529 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.551 239549 DEBUG nova.virt.libvirt.vif [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1837701869',display_name='tempest-TestVolumeBootPattern-server-1837701869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1837701869',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-i8jac8m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:57Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=589acca5-dd9e-4695-b32a-0235932283d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.552 239549 DEBUG nova.network.os_vif_util [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.553 239549 DEBUG nova.network.os_vif_util [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.554 239549 DEBUG nova.objects.instance [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 589acca5-dd9e-4695-b32a-0235932283d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.567 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.569 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <uuid>589acca5-dd9e-4695-b32a-0235932283d1</uuid>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <name>instance-00000018</name>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-server-1837701869</nova:name>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:45:03</nova:creationTime>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <nova:port uuid="4cb7a453-9db5-4fbc-a7ba-59600d76589c">
Feb 02 15:45:04 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <system>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="serial">589acca5-dd9e-4695-b32a-0235932283d1</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="uuid">589acca5-dd9e-4695-b32a-0235932283d1</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </system>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <os>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </os>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <features>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </features>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/589acca5-dd9e-4695-b32a-0235932283d1_disk.config">
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-9698e5da-2df0-4288-87d3-c3ebb6c2ab14">
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:04 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <serial>9698e5da-2df0-4288-87d3-c3ebb6c2ab14</serial>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:f4:66:84"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <target dev="tap4cb7a453-9d"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/console.log" append="off"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <video>
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </video>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:45:04 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:45:04 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:45:04 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:45:04 compute-0 nova_compute[239545]: </domain>
Feb 02 15:45:04 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.570 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Preparing to wait for external event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.570 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.570 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.571 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.571 239549 DEBUG nova.virt.libvirt.vif [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:44:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1837701869',display_name='tempest-TestVolumeBootPattern-server-1837701869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1837701869',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-i8jac8m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:44:57Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=589acca5-dd9e-4695-b32a-0235932283d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.571 239549 DEBUG nova.network.os_vif_util [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.572 239549 DEBUG nova.network.os_vif_util [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.572 239549 DEBUG os_vif [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.573 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.573 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.573 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.576 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.576 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cb7a453-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.576 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4cb7a453-9d, col_values=(('external_ids', {'iface-id': '4cb7a453-9db5-4fbc-a7ba-59600d76589c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:66:84', 'vm-uuid': '589acca5-dd9e-4695-b32a-0235932283d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:04 compute-0 NetworkManager[49171]: <info>  [1770047104.5788] manager: (tap4cb7a453-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.580 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.583 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.584 239549 INFO os_vif [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d')
Feb 02 15:45:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/785412453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.604 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.604 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.605 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.605 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.605 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Processing event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.605 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.606 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.606 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.606 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.606 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] No waiting events found dispatching network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.607 239549 WARNING nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received unexpected event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 for instance with vm_state building and task_state spawning.
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.607 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-changed-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.607 239549 DEBUG nova.compute.manager [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Refreshing instance network info cache due to event network-changed-b40b5abb-11a7-4bce-96a9-904feea605f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.607 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.608 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.608 239549 DEBUG nova.network.neutron [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Refreshing network info cache for port b40b5abb-11a7-4bce-96a9-904feea605f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.609 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.615 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047104.6140323, 58bc96ea-b6cb-4080-b353-861ed4e160f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.616 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] VM Resumed (Lifecycle Event)
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.618 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.626 239549 INFO nova.virt.libvirt.driver [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance spawned successfully.
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.627 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.643 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.651 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.655 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.655 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.656 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:f4:66:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.656 239549 INFO nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Using config drive
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.683 239549 DEBUG nova.storage.rbd_utils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 589acca5-dd9e-4695-b32a-0235932283d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.689 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.692 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.692 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.693 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.693 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.693 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.694 239549 DEBUG nova.virt.libvirt.driver [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.741 239549 INFO nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Took 10.03 seconds to spawn the instance on the hypervisor.
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.742 239549 DEBUG nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.929 239549 INFO nova.compute.manager [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Took 12.87 seconds to build instance.
Feb 02 15:45:04 compute-0 nova_compute[239545]: 2026-02-02 15:45:04.960 239549 DEBUG oslo_concurrency.lockutils [None req-07980f54-dc69-448d-a42d-2f59fdee738a df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:05 compute-0 ceph-mon[75334]: pgmap v1603: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Feb 02 15:45:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.411 239549 INFO nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Creating config drive at /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.416 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpir_rpqim execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.541 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpir_rpqim" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.577 239549 DEBUG nova.storage.rbd_utils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image 589acca5-dd9e-4695-b32a-0235932283d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.584 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config 589acca5-dd9e-4695-b32a-0235932283d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.615 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.617 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.618 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.668 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.715 239549 DEBUG oslo_concurrency.processutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config 589acca5-dd9e-4695-b32a-0235932283d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.717 239549 INFO nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Deleting local config drive /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1/disk.config because it was imported into RBD.
Feb 02 15:45:06 compute-0 NetworkManager[49171]: <info>  [1770047106.7514] manager: (tap4cb7a453-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Feb 02 15:45:06 compute-0 kernel: tap4cb7a453-9d: entered promiscuous mode
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.753 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:06 compute-0 ovn_controller[144995]: 2026-02-02T15:45:06Z|00214|binding|INFO|Claiming lport 4cb7a453-9db5-4fbc-a7ba-59600d76589c for this chassis.
Feb 02 15:45:06 compute-0 ovn_controller[144995]: 2026-02-02T15:45:06Z|00215|binding|INFO|4cb7a453-9db5-4fbc-a7ba-59600d76589c: Claiming fa:16:3e:f4:66:84 10.100.0.10
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.763 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:66:84 10.100.0.10'], port_security=['fa:16:3e:f4:66:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '589acca5-dd9e-4695-b32a-0235932283d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=4cb7a453-9db5-4fbc-a7ba-59600d76589c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.764 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 4cb7a453-9db5-4fbc-a7ba-59600d76589c in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.764 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.766 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:45:06 compute-0 ovn_controller[144995]: 2026-02-02T15:45:06Z|00216|binding|INFO|Setting lport 4cb7a453-9db5-4fbc-a7ba-59600d76589c ovn-installed in OVS
Feb 02 15:45:06 compute-0 ovn_controller[144995]: 2026-02-02T15:45:06Z|00217|binding|INFO|Setting lport 4cb7a453-9db5-4fbc-a7ba-59600d76589c up in Southbound
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.773 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.774 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[61c8b9eb-39ea-4140-933d-37b68600ec62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.775 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473fc4ca-a1 in ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:45:06 compute-0 nova_compute[239545]: 2026-02-02 15:45:06.775 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.778 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473fc4ca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.778 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e42de06c-4115-439e-8a22-bd1d19c8dec8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.779 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f634fb-e2c6-4082-8455-6d66bb4fdd83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 systemd-udevd[267948]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:06 compute-0 systemd-machined[207609]: New machine qemu-24-instance-00000018.
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.792 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[953e21c5-32e7-449a-8525-41edab8e62a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 NetworkManager[49171]: <info>  [1770047106.7992] device (tap4cb7a453-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:45:06 compute-0 NetworkManager[49171]: <info>  [1770047106.8001] device (tap4cb7a453-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:45:06 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.805 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9fecf162-1d02-4c8c-8379-b57f298236e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.830 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[78d36e02-49c4-4fab-9514-b658b59d1017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.834 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[02b77e51-9dd8-425e-878b-be0231a4e3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 NetworkManager[49171]: <info>  [1770047106.8354] manager: (tap473fc4ca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Feb 02 15:45:06 compute-0 systemd-udevd[267953]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.861 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3b64eb-d92a-4973-b72b-17d1605fcf30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.864 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[50059f1c-e3b7-47ac-99d1-c0df3226e7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 NetworkManager[49171]: <info>  [1770047106.8867] device (tap473fc4ca-a0): carrier: link connected
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.889 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[d51f16c6-919b-4aaf-95f5-140361fded7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.907 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8847bea2-e679-4375-b102-5b8fd8827044]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459280, 'reachable_time': 26890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267981, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.921 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[62eba961-8818-4461-8814-88a1f3663dfb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:14cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459280, 'tstamp': 459280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267982, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.956 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[68e89808-9c28-435b-be24-6eda9fe9e0fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459280, 'reachable_time': 26890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267983, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:06.982 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[9effdb53-ea5b-4ce7-8538-818a65f151c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.030 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[83ecc02a-ccf9-4391-a492-028c8b08a1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.032 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.033 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.034 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:07 compute-0 kernel: tap473fc4ca-a0: entered promiscuous mode
Feb 02 15:45:07 compute-0 NetworkManager[49171]: <info>  [1770047107.0367] manager: (tap473fc4ca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.036 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.037 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.039 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:07 compute-0 ovn_controller[144995]: 2026-02-02T15:45:07Z|00218|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.040 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.045 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.046 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.048 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[fed92aab-a966-4baf-968e-d242eecc88b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.049 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/473fc4ca-a137-447b-9349-9f4677babee6.pid.haproxy
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:45:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:07.049 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'env', 'PROCESS_TAG=haproxy-473fc4ca-a137-447b-9349-9f4677babee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473fc4ca-a137-447b-9349-9f4677babee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.107 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.167 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047107.1672382, 589acca5-dd9e-4695-b32a-0235932283d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.168 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] VM Started (Lifecycle Event)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.182 239549 DEBUG nova.network.neutron [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updated VIF entry in instance network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.184 239549 DEBUG nova.network.neutron [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating instance_info_cache with network_info: [{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.192 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.197 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047107.1699095, 589acca5-dd9e-4695-b32a-0235932283d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.198 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] VM Paused (Lifecycle Event)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.214 239549 DEBUG oslo_concurrency.lockutils [req-443accdb-0dcd-40f9-94cf-0f4774fda694 req-ffe95349-8778-47d5-986a-a362f2298531 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.219 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.223 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.245 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.320 239549 DEBUG nova.compute.manager [req-5a5fffa5-3321-48a8-b87a-4438e2808235 req-62e936d6-1e44-4415-a830-3341dc9951be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.322 239549 DEBUG oslo_concurrency.lockutils [req-5a5fffa5-3321-48a8-b87a-4438e2808235 req-62e936d6-1e44-4415-a830-3341dc9951be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.322 239549 DEBUG oslo_concurrency.lockutils [req-5a5fffa5-3321-48a8-b87a-4438e2808235 req-62e936d6-1e44-4415-a830-3341dc9951be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.323 239549 DEBUG oslo_concurrency.lockutils [req-5a5fffa5-3321-48a8-b87a-4438e2808235 req-62e936d6-1e44-4415-a830-3341dc9951be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.323 239549 DEBUG nova.compute.manager [req-5a5fffa5-3321-48a8-b87a-4438e2808235 req-62e936d6-1e44-4415-a830-3341dc9951be d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Processing event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.325 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.330 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.331 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047107.3298953, 589acca5-dd9e-4695-b32a-0235932283d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.332 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] VM Resumed (Lifecycle Event)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.336 239549 INFO nova.virt.libvirt.driver [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Instance spawned successfully.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.338 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:45:07 compute-0 podman[268057]: 2026-02-02 15:45:07.357902582 +0000 UTC m=+0.048891461 container create 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.363 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.374 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.379 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.382 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.383 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.383 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.384 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.384 239549 DEBUG nova.virt.libvirt.driver [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:07 compute-0 systemd[1]: Started libpod-conmon-6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365.scope.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.405 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a1efaa94d0ed1927c87de7e54ac1af8d879769fff2bbbfe3d58141a752f235/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:07 compute-0 podman[268057]: 2026-02-02 15:45:07.333521179 +0000 UTC m=+0.024510088 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:45:07 compute-0 podman[268057]: 2026-02-02 15:45:07.435667456 +0000 UTC m=+0.126656355 container init 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:45:07 compute-0 podman[268057]: 2026-02-02 15:45:07.444543202 +0000 UTC m=+0.135532081 container start 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:45:07 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [NOTICE]   (268076) : New worker (268078) forked
Feb 02 15:45:07 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [NOTICE]   (268076) : Loading success.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.482 239549 INFO nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Took 8.59 seconds to spawn the instance on the hypervisor.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.483 239549 DEBUG nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.564 239549 INFO nova.compute.manager [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Took 11.40 seconds to build instance.
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.585 239549 DEBUG oslo_concurrency.lockutils [None req-8a8428a7-0b61-4711-85e8-8d92139b9ff0 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:07 compute-0 ceph-mon[75334]: pgmap v1604: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.719 239549 DEBUG nova.network.neutron [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated VIF entry in instance network info cache for port b40b5abb-11a7-4bce-96a9-904feea605f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.719 239549 DEBUG nova.network.neutron [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.734 239549 DEBUG oslo_concurrency.lockutils [req-45eea749-b23c-4ee1-8bc8-536a44348339 req-535e7464-bf0f-48c5-bce8-b82c5da4b9bc d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.735 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.735 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:45:07 compute-0 nova_compute[239545]: 2026-02-02 15:45:07.735 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 96 op/s
Feb 02 15:45:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.406 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.501 239549 DEBUG nova.compute.manager [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.502 239549 DEBUG oslo_concurrency.lockutils [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.503 239549 DEBUG oslo_concurrency.lockutils [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.504 239549 DEBUG oslo_concurrency.lockutils [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.504 239549 DEBUG nova.compute.manager [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] No waiting events found dispatching network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.505 239549 WARNING nova.compute.manager [req-d80f6074-630c-4085-9c93-ade0eecbcf92 req-99e54ab9-dbda-426d-9833-d3b3dfa38940 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received unexpected event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c for instance with vm_state active and task_state None.
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:09 compute-0 ceph-mon[75334]: pgmap v1605: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 96 op/s
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.658 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.662 239549 DEBUG nova.compute.manager [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-changed-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.663 239549 DEBUG nova.compute.manager [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Refreshing instance network info cache due to event network-changed-b06b3b06-65e4-495f-8828-87024e852a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.663 239549 DEBUG oslo_concurrency.lockutils [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.663 239549 DEBUG oslo_concurrency.lockutils [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.664 239549 DEBUG nova.network.neutron [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Refreshing network info cache for port b06b3b06-65e4-495f-8828-87024e852a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.678 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:09 compute-0 nova_compute[239545]: 2026-02-02 15:45:09.678 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:45:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 27 KiB/s wr, 153 op/s
Feb 02 15:45:10 compute-0 nova_compute[239545]: 2026-02-02 15:45:10.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:10 compute-0 nova_compute[239545]: 2026-02-02 15:45:10.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:11 compute-0 ceph-mon[75334]: pgmap v1606: 305 pgs: 305 active+clean; 327 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 27 KiB/s wr, 153 op/s
Feb 02 15:45:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 345 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Feb 02 15:45:11 compute-0 ovn_controller[144995]: 2026-02-02T15:45:11Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:7b:e6 10.100.0.6
Feb 02 15:45:11 compute-0 ovn_controller[144995]: 2026-02-02T15:45:11Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:7b:e6 10.100.0.6
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.000 239549 DEBUG nova.network.neutron [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updated VIF entry in instance network info cache for port b06b3b06-65e4-495f-8828-87024e852a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.001 239549 DEBUG nova.network.neutron [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updating instance_info_cache with network_info: [{"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.023 239549 DEBUG oslo_concurrency.lockutils [req-e06513c5-a549-40ce-9648-368f06257870 req-1c977ee6-8e65-46ac-bf61-ca7def08f1b9 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-58bc96ea-b6cb-4080-b353-861ed4e160f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.184 239549 DEBUG nova.compute.manager [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.184 239549 DEBUG nova.compute.manager [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing instance network info cache due to event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.185 239549 DEBUG oslo_concurrency.lockutils [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.185 239549 DEBUG oslo_concurrency.lockutils [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.185 239549 DEBUG nova.network.neutron [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.569 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.570 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.570 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.570 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:45:12 compute-0 nova_compute[239545]: 2026-02-02 15:45:12.570 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:45:13 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864572333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.103 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.174 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.175 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.179 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.179 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.182 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.182 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.334 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.335 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3785MB free_disk=59.94676384795457GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.335 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.335 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.403 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.403 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 58bc96ea-b6cb-4080-b353-861ed4e160f9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.403 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 589acca5-dd9e-4695-b32a-0235932283d1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.404 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.404 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.462 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.544 239549 DEBUG nova.network.neutron [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updated VIF entry in instance network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.545 239549 DEBUG nova.network.neutron [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating instance_info_cache with network_info: [{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:13 compute-0 nova_compute[239545]: 2026-02-02 15:45:13.564 239549 DEBUG oslo_concurrency.lockutils [req-b57cf6c9-8a57-4ece-99e1-37b06e01b02c req-55457528-cb72-4eff-bf5f-e233cd08a89d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:13 compute-0 ceph-mon[75334]: pgmap v1607: 305 pgs: 305 active+clean; 345 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Feb 02 15:45:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1864572333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 351 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Feb 02 15:45:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:45:14 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997832303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.040 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.045 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.063 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.083 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.084 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.408 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:14 compute-0 nova_compute[239545]: 2026-02-02 15:45:14.578 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:14 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1997832303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:15 compute-0 nova_compute[239545]: 2026-02-02 15:45:15.084 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:15 compute-0 nova_compute[239545]: 2026-02-02 15:45:15.084 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:15 compute-0 nova_compute[239545]: 2026-02-02 15:45:15.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:45:15 compute-0 ceph-mon[75334]: pgmap v1608: 305 pgs: 305 active+clean; 351 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Feb 02 15:45:15 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 15:45:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 360 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Feb 02 15:45:16 compute-0 podman[268133]: 2026-02-02 15:45:16.336806531 +0000 UTC m=+0.071716646 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:45:16 compute-0 podman[268132]: 2026-02-02 15:45:16.351907159 +0000 UTC m=+0.092539473 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 02 15:45:17 compute-0 ceph-mon[75334]: pgmap v1609: 305 pgs: 305 active+clean; 360 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Feb 02 15:45:17 compute-0 ovn_controller[144995]: 2026-02-02T15:45:17Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:b7:65 10.100.0.4
Feb 02 15:45:17 compute-0 ovn_controller[144995]: 2026-02-02T15:45:17Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:b7:65 10.100.0.4
Feb 02 15:45:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 360 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Feb 02 15:45:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:19 compute-0 nova_compute[239545]: 2026-02-02 15:45:19.410 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:19 compute-0 nova_compute[239545]: 2026-02-02 15:45:19.580 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:19 compute-0 ceph-mon[75334]: pgmap v1610: 305 pgs: 305 active+clean; 360 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Feb 02 15:45:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 382 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 234 op/s
Feb 02 15:45:20 compute-0 ovn_controller[144995]: 2026-02-02T15:45:20Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Feb 02 15:45:20 compute-0 ovn_controller[144995]: 2026-02-02T15:45:20Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f4:66:84 10.100.0.10
Feb 02 15:45:21 compute-0 ceph-mon[75334]: pgmap v1611: 305 pgs: 305 active+clean; 382 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 234 op/s
Feb 02 15:45:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 429 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.0 MiB/s wr, 252 op/s
Feb 02 15:45:22 compute-0 nova_compute[239545]: 2026-02-02 15:45:22.963 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:22 compute-0 nova_compute[239545]: 2026-02-02 15:45:22.964 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:22 compute-0 nova_compute[239545]: 2026-02-02 15:45:22.978 239549 DEBUG nova.objects.instance [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lazy-loading 'flavor' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.007 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.172 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.173 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.174 239549 INFO nova.compute.manager [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Attaching volume 07dac747-755a-4dfa-9f1f-96c172aeb2da to /dev/vdb
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.307 239549 DEBUG os_brick.utils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.309 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.317 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.318 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0ecbcb-838e-4c26-b827-2543597bf382]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.319 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.325 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.325 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fd28f6-06b2-4813-8a25-131e1210dd24]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.326 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.331 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.331 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[99b96eee-6607-42f2-bb31-54e5918cd731]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.332 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0c5d45-ffd0-491e-a008-86fe247838d9]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.332 239549 DEBUG oslo_concurrency.processutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.356 239549 DEBUG oslo_concurrency.processutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.358 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.358 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.359 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.359 239549 DEBUG os_brick.utils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:45:23 compute-0 nova_compute[239545]: 2026-02-02 15:45:23.359 239549 DEBUG nova.virt.block_device [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating existing volume attachment record: f7e64edc-7f30-4124-88df-8ebe67ad61e0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:45:23 compute-0 ceph-mon[75334]: pgmap v1612: 305 pgs: 305 active+clean; 429 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.0 MiB/s wr, 252 op/s
Feb 02 15:45:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 429 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.2 MiB/s wr, 167 op/s
Feb 02 15:45:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1794080839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.097 239549 DEBUG nova.objects.instance [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lazy-loading 'flavor' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.128 239549 DEBUG nova.virt.libvirt.driver [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Attempting to attach volume 07dac747-755a-4dfa-9f1f-96c172aeb2da with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.130 239549 DEBUG nova.virt.libvirt.guest [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:45:24 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-07dac747-755a-4dfa-9f1f-96c172aeb2da">
Feb 02 15:45:24 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   </source>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:45:24 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:45:24 compute-0 nova_compute[239545]:   <serial>07dac747-755a-4dfa-9f1f-96c172aeb2da</serial>
Feb 02 15:45:24 compute-0 nova_compute[239545]: </disk>
Feb 02 15:45:24 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.223 239549 DEBUG nova.virt.libvirt.driver [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.224 239549 DEBUG nova.virt.libvirt.driver [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.224 239549 DEBUG nova.virt.libvirt.driver [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.224 239549 DEBUG nova.virt.libvirt.driver [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] No VIF found with MAC fa:16:3e:a3:7b:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.413 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.424 239549 DEBUG oslo_concurrency.lockutils [None req-7c03d574-5158-4a37-b951-8da546101968 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:24 compute-0 nova_compute[239545]: 2026-02-02 15:45:24.582 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1794080839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f4:66:84 10.100.0.10
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.455 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.455 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.456 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.456 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.456 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.457 239549 INFO nova.compute.manager [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Terminating instance
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.459 239549 DEBUG nova.compute.manager [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:45:25 compute-0 kernel: tapb06b3b06-65 (unregistering): left promiscuous mode
Feb 02 15:45:25 compute-0 NetworkManager[49171]: <info>  [1770047125.5001] device (tapb06b3b06-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00219|binding|INFO|Releasing lport b06b3b06-65e4-495f-8828-87024e852a05 from this chassis (sb_readonly=0)
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00220|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 down in Southbound
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.511 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00221|binding|INFO|Removing iface tapb06b3b06-65 ovn-installed in OVS
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.515 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.522 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b7:65 10.100.0.4'], port_security=['fa:16:3e:70:b7:65 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b06b3b06-65e4-495f-8828-87024e852a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.523 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b06b3b06-65e4-495f-8828-87024e852a05 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.524 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.525 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.527 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[943760e1-0ec7-41c6-8809-e793a34e0e6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.527 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace which is not needed anymore
Feb 02 15:45:25 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb 02 15:45:25 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.308s CPU time.
Feb 02 15:45:25 compute-0 systemd-machined[207609]: Machine qemu-23-instance-00000017 terminated.
Feb 02 15:45:25 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [NOTICE]   (267817) : haproxy version is 2.8.14-c23fe91
Feb 02 15:45:25 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [NOTICE]   (267817) : path to executable is /usr/sbin/haproxy
Feb 02 15:45:25 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [WARNING]  (267817) : Exiting Master process...
Feb 02 15:45:25 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [ALERT]    (267817) : Current worker (267819) exited with code 143 (Terminated)
Feb 02 15:45:25 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[267813]: [WARNING]  (267817) : All workers exited. Exiting... (0)
Feb 02 15:45:25 compute-0 systemd[1]: libpod-0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652.scope: Deactivated successfully.
Feb 02 15:45:25 compute-0 podman[268229]: 2026-02-02 15:45:25.666658523 +0000 UTC m=+0.050808809 container died 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:45:25 compute-0 kernel: tapb06b3b06-65: entered promiscuous mode
Feb 02 15:45:25 compute-0 NetworkManager[49171]: <info>  [1770047125.6785] manager: (tapb06b3b06-65): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Feb 02 15:45:25 compute-0 systemd-udevd[268208]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:25 compute-0 kernel: tapb06b3b06-65 (unregistering): left promiscuous mode
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00222|binding|INFO|Claiming lport b06b3b06-65e4-495f-8828-87024e852a05 for this chassis.
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00223|binding|INFO|b06b3b06-65e4-495f-8828-87024e852a05: Claiming fa:16:3e:70:b7:65 10.100.0.4
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.681 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.689 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b7:65 10.100.0.4'], port_security=['fa:16:3e:70:b7:65 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b06b3b06-65e4-495f-8828-87024e852a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00224|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 ovn-installed in OVS
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00225|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 up in Southbound
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.696 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00226|binding|INFO|Releasing lport b06b3b06-65e4-495f-8828-87024e852a05 from this chassis (sb_readonly=1)
Feb 02 15:45:25 compute-0 ceph-mon[75334]: pgmap v1613: 305 pgs: 305 active+clean; 429 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.2 MiB/s wr, 167 op/s
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00227|if_status|INFO|Dropped 3 log messages in last 760 seconds (most recently, 760 seconds ago) due to excessive rate
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00228|if_status|INFO|Not setting lport b06b3b06-65e4-495f-8828-87024e852a05 down as sb is readonly
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00229|binding|INFO|Removing iface tapb06b3b06-65 ovn-installed in OVS
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00230|binding|INFO|Releasing lport b06b3b06-65e4-495f-8828-87024e852a05 from this chassis (sb_readonly=0)
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.700 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00231|binding|INFO|Setting lport b06b3b06-65e4-495f-8828-87024e852a05 down in Southbound
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.701 239549 INFO nova.virt.libvirt.driver [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Instance destroyed successfully.
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.701 239549 DEBUG nova.objects.instance [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'resources' on Instance uuid 58bc96ea-b6cb-4080-b353-861ed4e160f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.705 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652-userdata-shm.mount: Deactivated successfully.
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.708 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b7:65 10.100.0.4'], port_security=['fa:16:3e:70:b7:65 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '58bc96ea-b6cb-4080-b353-861ed4e160f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b06b3b06-65e4-495f-8828-87024e852a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-317df7ae878caf744ba209764c5383b33238a89c916a8c86593d6c39cfdf98b7-merged.mount: Deactivated successfully.
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.715 239549 DEBUG nova.virt.libvirt.vif [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:44:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-640335234',display_name='tempest-TransferEncryptedVolumeTest-server-640335234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-640335234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:45:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-uxc51fyo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:45:04Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=58bc96ea-b6cb-4080-b353-861ed4e160f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.715 239549 DEBUG nova.network.os_vif_util [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "b06b3b06-65e4-495f-8828-87024e852a05", "address": "fa:16:3e:70:b7:65", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06b3b06-65", "ovs_interfaceid": "b06b3b06-65e4-495f-8828-87024e852a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.716 239549 DEBUG nova.network.os_vif_util [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.716 239549 DEBUG os_vif [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.718 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.718 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb06b3b06-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.722 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:45:25 compute-0 podman[268229]: 2026-02-02 15:45:25.723887076 +0000 UTC m=+0.108037352 container cleanup 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.724 239549 INFO os_vif [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:b7:65,bridge_name='br-int',has_traffic_filtering=True,id=b06b3b06-65e4-495f-8828-87024e852a05,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06b3b06-65')
Feb 02 15:45:25 compute-0 systemd[1]: libpod-conmon-0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652.scope: Deactivated successfully.
Feb 02 15:45:25 compute-0 podman[268264]: 2026-02-02 15:45:25.790752313 +0000 UTC m=+0.044296339 container remove 0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.796 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ce66066a-55c8-4119-bcdc-bb50d04ee4b6]: (4, ('Mon Feb  2 03:45:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652)\n0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652\nMon Feb  2 03:45:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652)\n0d747ddfdd16590c9f225416409b44fecf4fd438fcd6517d7f29723f990cd652\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.797 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9b0bd0-7f5f-47a9-83d1-9006e57cc852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.798 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:66:84 10.100.0.10
Feb 02 15:45:25 compute-0 ovn_controller[144995]: 2026-02-02T15:45:25Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:66:84 10.100.0.10
Feb 02 15:45:25 compute-0 kernel: tapb6f67b7a-30: left promiscuous mode
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.801 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.804 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.807 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3baffdbf-8ae4-4595-a1c0-0900fb5801e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.810 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.821 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf7d967-4f0b-4af8-920e-41645520e2cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.822 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d7273d08-517d-41ba-8c4f-6af45a75c005]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.835 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fab6d9-7950-4a2c-8d0e-976275887ff8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458805, 'reachable_time': 38252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268297, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.839 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.839 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4cfc48-cbde-4305-84e6-a04aca5f22f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 systemd[1]: run-netns-ovnmeta\x2db6f67b7a\x2d3fd7\x2d4623\x2d9937\x2d142eb5dabe2c.mount: Deactivated successfully.
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.840 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b06b3b06-65e4-495f-8828-87024e852a05 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.842 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.843 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e3f83a-d2e9-4d3e-8f30-1c349b3d1b99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.844 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b06b3b06-65e4-495f-8828-87024e852a05 in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.846 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:45:25 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:25.846 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[20ba2a51-7ed2-4a5a-9bd3-14e871352ad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.877 239549 INFO nova.virt.libvirt.driver [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Deleting instance files /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9_del
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.878 239549 INFO nova.virt.libvirt.driver [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Deletion of /var/lib/nova/instances/58bc96ea-b6cb-4080-b353-861ed4e160f9_del complete
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.928 239549 INFO nova.compute.manager [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Took 0.47 seconds to destroy the instance on the hypervisor.
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.929 239549 DEBUG oslo.service.loopingcall [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.929 239549 DEBUG nova.compute.manager [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:45:25 compute-0 nova_compute[239545]: 2026-02-02 15:45:25.930 239549 DEBUG nova.network.neutron [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:45:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 431 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.9 MiB/s wr, 152 op/s
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.712 239549 DEBUG nova.compute.manager [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-unplugged-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.713 239549 DEBUG oslo_concurrency.lockutils [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.713 239549 DEBUG oslo_concurrency.lockutils [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.714 239549 DEBUG oslo_concurrency.lockutils [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.714 239549 DEBUG nova.compute.manager [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] No waiting events found dispatching network-vif-unplugged-b06b3b06-65e4-495f-8828-87024e852a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:26 compute-0 nova_compute[239545]: 2026-02-02 15:45:26.714 239549 DEBUG nova.compute.manager [req-e2e30aa9-3663-4049-85e9-bd3f1a496a96 req-ce01eebd-1270-46b5-99f9-6665d73912c7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-unplugged-b06b3b06-65e4-495f-8828-87024e852a05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.037 239549 DEBUG nova.network.neutron [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.052 239549 INFO nova.compute.manager [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Took 1.12 seconds to deallocate network for instance.
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.102 239549 DEBUG nova.compute.manager [req-aac5442b-7990-43e8-9eec-5a4f82fbec52 req-0a247fbd-357d-46da-9af2-c271a3b4620e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-deleted-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.193 239549 INFO nova.compute.manager [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Took 0.14 seconds to detach 1 volumes for instance.
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.237 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.238 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.312 239549 DEBUG oslo_concurrency.processutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Feb 02 15:45:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Feb 02 15:45:27 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Feb 02 15:45:27 compute-0 ceph-mon[75334]: pgmap v1614: 305 pgs: 305 active+clean; 431 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.9 MiB/s wr, 152 op/s
Feb 02 15:45:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:45:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300227823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.855 239549 DEBUG oslo_concurrency.processutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.861 239549 DEBUG nova.compute.provider_tree [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.886 239549 DEBUG nova.scheduler.client.report [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.920 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:27 compute-0 nova_compute[239545]: 2026-02-02 15:45:27.945 239549 INFO nova.scheduler.client.report [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Deleted allocations for instance 58bc96ea-b6cb-4080-b353-861ed4e160f9
Feb 02 15:45:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 431 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 7.0 MiB/s wr, 156 op/s
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.017 239549 DEBUG oslo_concurrency.lockutils [None req-e7cc5173-5f20-491d-8b16-3870094cbbeb df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:45:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442636180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:45:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:45:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442636180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:45:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e463 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Feb 02 15:45:28 compute-0 ceph-mon[75334]: osdmap e463: 3 total, 3 up, 3 in
Feb 02 15:45:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2300227823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2442636180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:45:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2442636180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:45:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Feb 02 15:45:28 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.815 239549 DEBUG nova.compute.manager [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.816 239549 DEBUG oslo_concurrency.lockutils [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.816 239549 DEBUG oslo_concurrency.lockutils [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.816 239549 DEBUG oslo_concurrency.lockutils [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "58bc96ea-b6cb-4080-b353-861ed4e160f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.817 239549 DEBUG nova.compute.manager [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] No waiting events found dispatching network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:28 compute-0 nova_compute[239545]: 2026-02-02 15:45:28.817 239549 WARNING nova.compute.manager [req-1cc25e54-66cb-4505-b71a-6dd407f2c9fe req-5397651e-1097-40d9-93e3-880911d3c1fa d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Received unexpected event network-vif-plugged-b06b3b06-65e4-495f-8828-87024e852a05 for instance with vm_state deleted and task_state None.
Feb 02 15:45:29 compute-0 nova_compute[239545]: 2026-02-02 15:45:29.416 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:29 compute-0 ceph-mon[75334]: pgmap v1616: 305 pgs: 305 active+clean; 431 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 7.0 MiB/s wr, 156 op/s
Feb 02 15:45:29 compute-0 ceph-mon[75334]: osdmap e464: 3 total, 3 up, 3 in
Feb 02 15:45:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 432 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 63 KiB/s wr, 23 op/s
Feb 02 15:45:30 compute-0 nova_compute[239545]: 2026-02-02 15:45:30.720 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Feb 02 15:45:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Feb 02 15:45:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Feb 02 15:45:31 compute-0 ceph-mon[75334]: pgmap v1618: 305 pgs: 305 active+clean; 432 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 63 KiB/s wr, 23 op/s
Feb 02 15:45:31 compute-0 ceph-mon[75334]: osdmap e465: 3 total, 3 up, 3 in
Feb 02 15:45:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 433 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 209 KiB/s wr, 95 op/s
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.172 239549 DEBUG oslo_concurrency.lockutils [None req-b27fd6a7-9eab-4a92-95e6-119dddaca4bf 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.173 239549 DEBUG oslo_concurrency.lockutils [None req-b27fd6a7-9eab-4a92-95e6-119dddaca4bf 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.187 239549 INFO nova.compute.manager [None req-b27fd6a7-9eab-4a92-95e6-119dddaca4bf 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Detaching volume 07dac747-755a-4dfa-9f1f-96c172aeb2da
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.193 239549 DEBUG oslo_concurrency.lockutils [None req-b27fd6a7-9eab-4a92-95e6-119dddaca4bf 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server [None req-b27fd6a7-9eab-4a92-95e6-119dddaca4bf 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Exception during message handling: nova.exception.CinderConnectionFailed: Connection to cinder host failed: Unable to establish connection to https://cinder-internal.openstack.svc:8776/: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 700, in urlopen
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     httplib_response = self._make_request(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 446, in _make_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     six.raise_from(e, None)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "<string>", line 3, in raise_from
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 441, in _make_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     httplib_response = conn.getresponse()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 1377, in getresponse
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     response.begin()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 320, in begin
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     version, status, reason = self._read_status()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 289, in _read_status
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise RemoteDisconnected("Remote end closed connection without"
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server http.client.RemoteDisconnected: Remote end closed connection without response
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 612, in send
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     resp = conn.urlopen(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 756, in urlopen
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     retries = retries.increment(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/util/retry.py", line 534, in increment
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise six.reraise(type(error), error, _stacktrace)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/packages/six.py", line 708, in reraise
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise value.with_traceback(tb)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 700, in urlopen
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     httplib_response = self._make_request(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 446, in _make_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     six.raise_from(e, None)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "<string>", line 3, in raise_from
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 441, in _make_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     httplib_response = conn.getresponse()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 1377, in getresponse
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     response.begin()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 320, in begin
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     version, status, reason = self._read_status()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/http/client.py", line 289, in _read_status
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise RemoteDisconnected("Remote end closed connection without"
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1022, in _send_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     resp = self.session.request(method, url, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 544, in request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     resp = self.send(prep, **send_kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 657, in send
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     r = adapter.send(request, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 671, in send
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise ConnectionError(err, request=request)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     res = method(self, ctx, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 435, in wrapper
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     res = method(self, ctx, volume_id, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 504, in get
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     item = cinderclient(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 267, in cinderclient
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     version = _check_microversion(context, url, microversion)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 185, in _check_microversion
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     max_api_version = _get_highest_client_server_version(context, url)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 167, in _get_highest_client_server_version
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     min_server, max_server = _get_server_version(context, url)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 146, in _get_server_version
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     response = _SESSION.get(version_url, auth=auth)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1141, in get
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return self.request(url, 'GET', **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 931, in request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     resp = send(**kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1038, in _send_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise exceptions.ConnectFailure(msg)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://cinder-internal.openstack.svc:8776/: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     self.force_reraise()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise self.value
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     self.force_reraise()
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise self.value
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7648, in detach_volume
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     do_detach_volume(context, volume_id, instance, attachment_id)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7645, in do_detach_volume
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     self._detach_volume(context, bdm, instance,
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7596, in _detach_volume
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     driver_bdm.detach(context, instance, self.volume_api, self.driver,
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 533, in detach
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     volume = self._get_volume(context, volume_api, self.volume_id)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 406, in _get_volume
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return volume_api.get(context, volume_id, microversion='3.48')
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 401, in wrapper
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     _reraise(exception.CinderConnectionFailed(reason=err_msg))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise desired_exc.with_traceback(sys.exc_info()[2])
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     res = method(self, ctx, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 435, in wrapper
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     res = method(self, ctx, volume_id, *args, **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 504, in get
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     item = cinderclient(
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 267, in cinderclient
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     version = _check_microversion(context, url, microversion)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 185, in _check_microversion
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     max_api_version = _get_highest_client_server_version(context, url)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 167, in _get_highest_client_server_version
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     min_server, max_server = _get_server_version(context, url)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 146, in _get_server_version
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     response = _SESSION.get(version_url, auth=auth)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1141, in get
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     return self.request(url, 'GET', **kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 931, in request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     resp = send(**kwargs)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1038, in _send_request
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server     raise exceptions.ConnectFailure(msg)
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server nova.exception.CinderConnectionFailed: Connection to cinder host failed: Unable to establish connection to https://cinder-internal.openstack.svc:8776/: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Feb 02 15:45:32 compute-0 nova_compute[239545]: 2026-02-02 15:45:32.248 239549 ERROR oslo_messaging.rpc.server 
Feb 02 15:45:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:33 compute-0 ceph-mon[75334]: pgmap v1620: 305 pgs: 305 active+clean; 433 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 209 KiB/s wr, 95 op/s
Feb 02 15:45:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 224 KiB/s wr, 93 op/s
Feb 02 15:45:34 compute-0 nova_compute[239545]: 2026-02-02 15:45:34.418 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.644 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.645 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.662 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.721 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.779 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.780 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.787 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.788 239549 INFO nova.compute.claims [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:45:35 compute-0 ceph-mon[75334]: pgmap v1621: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 224 KiB/s wr, 93 op/s
Feb 02 15:45:35 compute-0 nova_compute[239545]: 2026-02-02 15:45:35.930 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 182 KiB/s wr, 89 op/s
Feb 02 15:45:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:45:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365915304' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.537 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.544 239549 DEBUG nova.compute.provider_tree [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.568 239549 DEBUG nova.scheduler.client.report [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.597 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.598 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.642 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.643 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.660 239549 INFO nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.678 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.714 239549 INFO nova.virt.block_device [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Booting with volume 4c639a87-991a-40d6-b1a2-c7bd5580d6b1 at /dev/vda
Feb 02 15:45:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/365915304' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.831 239549 DEBUG os_brick.utils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.832 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.841 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.841 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[7d79671f-46c4-41ff-8dd4-ede865b6e7b7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.842 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.847 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.848 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[4f16115e-a7da-4d55-bab7-5ff456793f7d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.849 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.855 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.855 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[60e39d43-afb9-4db7-bcb0-f4009e7009cb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.856 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[d14f6bc9-ef69-41f1-a79e-9eac4278b96d]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.857 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.874 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.876 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.876 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.877 239549 DEBUG os_brick.initiator.connectors.lightos [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.877 239549 DEBUG os_brick.utils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:45:36 compute-0 nova_compute[239545]: 2026-02-02 15:45:36.877 239549 DEBUG nova.virt.block_device [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updating existing volume attachment record: 391dffe9-78f3-471a-9dfb-ede96ee10899 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.145 239549 DEBUG nova.policy [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df03e4d41ae644fca567cfe648b7bad6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:45:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3425274395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:37 compute-0 ceph-mon[75334]: pgmap v1622: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 182 KiB/s wr, 89 op/s
Feb 02 15:45:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3425274395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.821 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Successfully created port: 15f9fd08-446b-4dd8-8735-546bb477e16b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.902 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.904 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.904 239549 INFO nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Creating image(s)
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.905 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.905 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Ensure instance console log exists: /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.906 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.906 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:37 compute-0 nova_compute[239545]: 2026-02-02 15:45:37.906 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 139 KiB/s wr, 70 op/s
Feb 02 15:45:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.519 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Successfully updated port: 15f9fd08-446b-4dd8-8735-546bb477e16b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.533 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.534 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquired lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.534 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.603 239549 DEBUG nova.compute.manager [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-changed-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.604 239549 DEBUG nova.compute.manager [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Refreshing instance network info cache due to event network-changed-15f9fd08-446b-4dd8-8735-546bb477e16b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.604 239549 DEBUG oslo_concurrency.lockutils [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:38 compute-0 nova_compute[239545]: 2026-02-02 15:45:38.659 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.419 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.640 239549 DEBUG nova.network.neutron [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updating instance_info_cache with network_info: [{"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.656 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Releasing lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.657 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Instance network_info: |[{"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.657 239549 DEBUG oslo_concurrency.lockutils [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.657 239549 DEBUG nova.network.neutron [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Refreshing network info cache for port 15f9fd08-446b-4dd8-8735-546bb477e16b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.660 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Start _get_guest_xml network_info=[{"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '391dffe9-78f3-471a-9dfb-ede96ee10899', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'dae5d782-1829-48e1-836e-4f8301eeb88f', 'attached_at': '', 'detached_at': '', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'serial': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.666 239549 WARNING nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.671 239549 DEBUG nova.virt.libvirt.host [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.671 239549 DEBUG nova.virt.libvirt.host [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.676 239549 DEBUG nova.virt.libvirt.host [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.677 239549 DEBUG nova.virt.libvirt.host [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.677 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.678 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.678 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.678 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.678 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.679 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.679 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.679 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.679 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.680 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.680 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.680 239549 DEBUG nova.virt.hardware [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.700 239549 DEBUG nova.storage.rbd_utils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:39 compute-0 nova_compute[239545]: 2026-02-02 15:45:39.703 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:39 compute-0 ceph-mon[75334]: pgmap v1623: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 139 KiB/s wr, 70 op/s
Feb 02 15:45:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 133 KiB/s wr, 65 op/s
Feb 02 15:45:40 compute-0 sudo[268388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:45:40 compute-0 sudo[268388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:40 compute-0 sudo[268388]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:40 compute-0 sudo[268413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:45:40 compute-0 sudo[268413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969906335' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.261 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.417 239549 DEBUG os_brick.encryptors [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Using volume encryption metadata '{'encryption_key_id': '99928bbb-1a74-4666-b753-6ab1de395869', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'dae5d782-1829-48e1-836e-4f8301eeb88f', 'attached_at': '', 'detached_at': '', 'volume_id': '4c639a87-991a-40d6-b1a2-c7bd5580d6b1', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.422 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.444 239549 DEBUG barbicanclient.v1.secrets [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/99928bbb-1a74-4666-b753-6ab1de395869 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.445 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.469 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.471 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.492 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.493 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.516 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.517 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.539 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.540 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.573 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.574 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.599 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.600 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.623 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.624 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 sudo[268413]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.649 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.650 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.672 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.673 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:45:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.695 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.696 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.704 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047125.7032168, 58bc96ea-b6cb-4080-b353-861ed4e160f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.704 239549 INFO nova.compute.manager [-] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] VM Stopped (Lifecycle Event)
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.719 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.720 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.725 239549 DEBUG nova.compute.manager [None req-95dfc73a-8b6f-45ba-99fc-21f0521590c4 - - - - - -] [instance: 58bc96ea-b6cb-4080-b353-861ed4e160f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.726 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:40 compute-0 sudo[268472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:45:40 compute-0 sudo[268472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:40 compute-0 sudo[268472]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.741 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.742 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.760 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.761 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.782 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 sudo[268497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.783 239549 INFO barbicanclient.base [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Calculated Secrets uuid ref: secrets/99928bbb-1a74-4666-b753-6ab1de395869
Feb 02 15:45:40 compute-0 sudo[268497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.802 239549 DEBUG barbicanclient.client [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.803 239549 DEBUG nova.virt.libvirt.host [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <volume>4c639a87-991a-40d6-b1a2-c7bd5580d6b1</volume>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:45:40 compute-0 nova_compute[239545]: </secret>
Feb 02 15:45:40 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.831 239549 DEBUG nova.virt.libvirt.vif [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1867779399',display_name='tempest-TransferEncryptedVolumeTest-server-1867779399',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1867779399',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-9hnysknf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:45:36Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=dae5d782-1829-48e1-836e-4f8301eeb88f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.831 239549 DEBUG nova.network.os_vif_util [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/969906335' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:45:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.832 239549 DEBUG nova.network.os_vif_util [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.834 239549 DEBUG nova.objects.instance [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid dae5d782-1829-48e1-836e-4f8301eeb88f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.845 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <uuid>dae5d782-1829-48e1-836e-4f8301eeb88f</uuid>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <name>instance-00000019</name>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1867779399</nova:name>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:45:39</nova:creationTime>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:user uuid="df03e4d41ae644fca567cfe648b7bad6">tempest-TransferEncryptedVolumeTest-1895614673-project-member</nova:user>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:project uuid="6d6011a66bdb41cea09b6018ceeec7d4">tempest-TransferEncryptedVolumeTest-1895614673</nova:project>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <nova:port uuid="15f9fd08-446b-4dd8-8735-546bb477e16b">
Feb 02 15:45:40 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <system>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="serial">dae5d782-1829-48e1-836e-4f8301eeb88f</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="uuid">dae5d782-1829-48e1-836e-4f8301eeb88f</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </system>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <os>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </os>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <features>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </features>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-4c639a87-991a-40d6-b1a2-c7bd5580d6b1">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <serial>4c639a87-991a-40d6-b1a2-c7bd5580d6b1</serial>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:45:40 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="038d05af-1102-4d45-801d-aa639f080938"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:33:48:48"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <target dev="tap15f9fd08-44"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/console.log" append="off"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <video>
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </video>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:45:40 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:45:40 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:45:40 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:45:40 compute-0 nova_compute[239545]: </domain>
Feb 02 15:45:40 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.847 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Preparing to wait for external event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.847 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.847 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.847 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.848 239549 DEBUG nova.virt.libvirt.vif [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1867779399',display_name='tempest-TransferEncryptedVolumeTest-server-1867779399',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1867779399',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-9hnysknf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:45:36Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=dae5d782-1829-48e1-836e-4f8301eeb88f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.848 239549 DEBUG nova.network.os_vif_util [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.849 239549 DEBUG nova.network.os_vif_util [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.849 239549 DEBUG os_vif [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.849 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.850 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.850 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.853 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.854 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15f9fd08-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.854 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15f9fd08-44, col_values=(('external_ids', {'iface-id': '15f9fd08-446b-4dd8-8735-546bb477e16b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:48:48', 'vm-uuid': 'dae5d782-1829-48e1-836e-4f8301eeb88f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.856 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:40 compute-0 NetworkManager[49171]: <info>  [1770047140.8576] manager: (tap15f9fd08-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.858 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.861 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.862 239549 INFO os_vif [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44')
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.912 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.912 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.913 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] No VIF found with MAC fa:16:3e:33:48:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.913 239549 INFO nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Using config drive
Feb 02 15:45:40 compute-0 nova_compute[239545]: 2026-02-02 15:45:40.935 239549 DEBUG nova.storage.rbd_utils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.0782951 +0000 UTC m=+0.051863044 container create 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:45:41 compute-0 systemd[1]: Started libpod-conmon-53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184.scope.
Feb 02 15:45:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.050660527 +0000 UTC m=+0.024228401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.161891275 +0000 UTC m=+0.135459129 container init 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.16994835 +0000 UTC m=+0.143516174 container start 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.173786874 +0000 UTC m=+0.147354708 container attach 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 15:45:41 compute-0 boring_hofstadter[268570]: 167 167
Feb 02 15:45:41 compute-0 systemd[1]: libpod-53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184.scope: Deactivated successfully.
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.176237124 +0000 UTC m=+0.149804958 container died 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a73ff658cd81859eb1d77d4514f83ff09387eab9bdc248b553eb22b817f0e9a2-merged.mount: Deactivated successfully.
Feb 02 15:45:41 compute-0 podman[268553]: 2026-02-02 15:45:41.231979831 +0000 UTC m=+0.205547665 container remove 53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:45:41 compute-0 systemd[1]: libpod-conmon-53942c6041c12dea3d17157c8b76af0a41c2607c5ecf89e5a1418c54a794f184.scope: Deactivated successfully.
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.379285887 +0000 UTC m=+0.038652482 container create a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:45:41 compute-0 systemd[1]: Started libpod-conmon-a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f.scope.
Feb 02 15:45:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.362662072 +0000 UTC m=+0.022028697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.464650495 +0000 UTC m=+0.124017130 container init a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.472873275 +0000 UTC m=+0.132239880 container start a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.476426841 +0000 UTC m=+0.135793476 container attach a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.526 239549 INFO nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Creating config drive at /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.534 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgafrns0u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.667 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgafrns0u" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.686 239549 DEBUG nova.storage.rbd_utils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] rbd image dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.689 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.712 239549 DEBUG nova.network.neutron [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updated VIF entry in instance network info cache for port 15f9fd08-446b-4dd8-8735-546bb477e16b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.714 239549 DEBUG nova.network.neutron [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updating instance_info_cache with network_info: [{"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.732 239549 DEBUG oslo_concurrency.lockutils [req-87d196d5-b67c-42f1-9630-403dc51c11a5 req-1aedf0a3-892c-439d-a907-35549e4df947 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.819 239549 DEBUG oslo_concurrency.processutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config dae5d782-1829-48e1-836e-4f8301eeb88f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.820 239549 INFO nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Deleting local config drive /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f/disk.config because it was imported into RBD.
Feb 02 15:45:41 compute-0 ceph-mon[75334]: pgmap v1624: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 133 KiB/s wr, 65 op/s
Feb 02 15:45:41 compute-0 kernel: tap15f9fd08-44: entered promiscuous mode
Feb 02 15:45:41 compute-0 NetworkManager[49171]: <info>  [1770047141.8754] manager: (tap15f9fd08-44): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Feb 02 15:45:41 compute-0 ovn_controller[144995]: 2026-02-02T15:45:41Z|00232|binding|INFO|Claiming lport 15f9fd08-446b-4dd8-8735-546bb477e16b for this chassis.
Feb 02 15:45:41 compute-0 ovn_controller[144995]: 2026-02-02T15:45:41Z|00233|binding|INFO|15f9fd08-446b-4dd8-8735-546bb477e16b: Claiming fa:16:3e:33:48:48 10.100.0.14
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.875 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:41 compute-0 nova_compute[239545]: 2026-02-02 15:45:41.883 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:41 compute-0 ovn_controller[144995]: 2026-02-02T15:45:41Z|00234|binding|INFO|Setting lport 15f9fd08-446b-4dd8-8735-546bb477e16b up in Southbound
Feb 02 15:45:41 compute-0 ovn_controller[144995]: 2026-02-02T15:45:41Z|00235|binding|INFO|Setting lport 15f9fd08-446b-4dd8-8735-546bb477e16b ovn-installed in OVS
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.887 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:48:48 10.100.0.14'], port_security=['fa:16:3e:33:48:48 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dae5d782-1829-48e1-836e-4f8301eeb88f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=15f9fd08-446b-4dd8-8735-546bb477e16b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.893 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 15f9fd08-446b-4dd8-8735-546bb477e16b in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c bound to our chassis
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.899 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.910 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b39362-6980-4d82-8708-73dbf67af9be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.911 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6f67b7a-31 in ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.912 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6f67b7a-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.912 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[917fd36c-3dfc-4151-ac63-d7ab89ee11cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 strange_payne[268610]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:45:41 compute-0 systemd-machined[207609]: New machine qemu-25-instance-00000019.
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.914 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dc95a3-1fb1-43be-a87a-761ddafa944d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 strange_payne[268610]: --> All data devices are unavailable
Feb 02 15:45:41 compute-0 systemd-udevd[268685]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:41 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.928 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[368ca00a-60ed-45ea-8d18-1bbc85f9c73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 NetworkManager[49171]: <info>  [1770047141.9415] device (tap15f9fd08-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:45:41 compute-0 NetworkManager[49171]: <info>  [1770047141.9421] device (tap15f9fd08-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:45:41 compute-0 systemd[1]: libpod-a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f.scope: Deactivated successfully.
Feb 02 15:45:41 compute-0 podman[268594]: 2026-02-02 15:45:41.948862963 +0000 UTC m=+0.608229568 container died a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.955 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc729ae-2734-4ff5-a968-5279378fe7ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9dd7881707642827184717d11c6eaa590b0c4eb144f7f6134cde55122d6f495-merged.mount: Deactivated successfully.
Feb 02 15:45:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 22 KiB/s wr, 13 op/s
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.992 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[b5877bba-4507-4af1-90e0-790bbd31711f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:41.998 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7766c924-dafe-4360-9cc5-8f918227eba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:41 compute-0 NetworkManager[49171]: <info>  [1770047141.9993] manager: (tapb6f67b7a-30): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Feb 02 15:45:42 compute-0 systemd-udevd[268688]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:42 compute-0 podman[268594]: 2026-02-02 15:45:42.000860768 +0000 UTC m=+0.660227363 container remove a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:45:42 compute-0 systemd[1]: libpod-conmon-a4a56fcfd3c515d8bf9a60e54b4a9d5847dd2a8a0081f9a7af2f9b2506451e2f.scope: Deactivated successfully.
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.032 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[772f9793-a6da-487b-ac21-ece33d40c1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.036 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e84f42-91a1-441f-8402-dd596c690f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 sudo[268497]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:42 compute-0 NetworkManager[49171]: <info>  [1770047142.0546] device (tapb6f67b7a-30): carrier: link connected
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.060 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e774e08f-4abb-4f30-b330-b7ec4fc3964d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.074 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[556ec868-76e4-40cd-8259-29d8f6bb23e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462797, 'reachable_time': 21454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268726, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.089 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbf5bc0-db12-4298-8be6-7b8c1f3a9bdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:b29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462797, 'tstamp': 462797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268736, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.103 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8ff6d7-7e1f-4dd9-a27e-1bb957b5022e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6f67b7a-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:0b:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462797, 'reachable_time': 21454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268750, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 sudo[268727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:45:42 compute-0 sudo[268727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:42 compute-0 sudo[268727]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.132 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5607af-b484-445b-a2d8-3dd340daf30b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 sudo[268756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:45:42 compute-0 sudo[268756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.176 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8cc51e-9a40-42ad-b685-86367390b72b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.178 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.179 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.179 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6f67b7a-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:42 compute-0 kernel: tapb6f67b7a-30: entered promiscuous mode
Feb 02 15:45:42 compute-0 NetworkManager[49171]: <info>  [1770047142.1819] manager: (tapb6f67b7a-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.181 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.184 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6f67b7a-30, col_values=(('external_ids', {'iface-id': '4216aeff-7d93-404b-9880-8737d42e9d19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:42 compute-0 ovn_controller[144995]: 2026-02-02T15:45:42Z|00236|binding|INFO|Releasing lport 4216aeff-7d93-404b-9880-8737d42e9d19 from this chassis (sb_readonly=0)
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.194 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.195 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.196 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[62fae235-f30b-46ab-b86f-108d64fd3d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.197 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.pid.haproxy
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID b6f67b7a-3fd7-4623-9937-142eb5dabe2c
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:45:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:42.198 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'env', 'PROCESS_TAG=haproxy-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6f67b7a-3fd7-4623-9937-142eb5dabe2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.444849297 +0000 UTC m=+0.040531848 container create c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:45:42 compute-0 systemd[1]: Started libpod-conmon-c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85.scope.
Feb 02 15:45:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.516 239549 DEBUG nova.compute.manager [req-841f23c7-6a00-4a9e-beaf-1ee1b9a463f2 req-03b8d673-bf0a-4caf-8895-a207ce01e798 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.517 239549 DEBUG oslo_concurrency.lockutils [req-841f23c7-6a00-4a9e-beaf-1ee1b9a463f2 req-03b8d673-bf0a-4caf-8895-a207ce01e798 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.517 239549 DEBUG oslo_concurrency.lockutils [req-841f23c7-6a00-4a9e-beaf-1ee1b9a463f2 req-03b8d673-bf0a-4caf-8895-a207ce01e798 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.518 239549 DEBUG oslo_concurrency.lockutils [req-841f23c7-6a00-4a9e-beaf-1ee1b9a463f2 req-03b8d673-bf0a-4caf-8895-a207ce01e798 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:42 compute-0 nova_compute[239545]: 2026-02-02 15:45:42.518 239549 DEBUG nova.compute.manager [req-841f23c7-6a00-4a9e-beaf-1ee1b9a463f2 req-03b8d673-bf0a-4caf-8895-a207ce01e798 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Processing event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.426459749 +0000 UTC m=+0.022142320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.524956626 +0000 UTC m=+0.120639177 container init c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.530625554 +0000 UTC m=+0.126308105 container start c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.533627017 +0000 UTC m=+0.129309598 container attach c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:45:42 compute-0 tender_montalcini[268873]: 167 167
Feb 02 15:45:42 compute-0 systemd[1]: libpod-c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85.scope: Deactivated successfully.
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.536967079 +0000 UTC m=+0.132649630 container died c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:45:42 compute-0 podman[268872]: 2026-02-02 15:45:42.550473447 +0000 UTC m=+0.055182393 container create 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-066f4eb8a614ec21a3058cf77285dd909587e2dfc13c1e31d09f94aa0227437a-merged.mount: Deactivated successfully.
Feb 02 15:45:42 compute-0 podman[268836]: 2026-02-02 15:45:42.574859732 +0000 UTC m=+0.170542273 container remove c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:45:42 compute-0 systemd[1]: Started libpod-conmon-14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044.scope.
Feb 02 15:45:42 compute-0 systemd[1]: libpod-conmon-c72545137a4a8d754c233029711fc20dd6b79b1b734eb72e56995a1cc569fd85.scope: Deactivated successfully.
Feb 02 15:45:42 compute-0 podman[268872]: 2026-02-02 15:45:42.520416466 +0000 UTC m=+0.025125392 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:45:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c357fd5ace59e88be6580fa97564ac0c8f187edc52399abb40aefa9365b3694/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:42 compute-0 podman[268872]: 2026-02-02 15:45:42.633278824 +0000 UTC m=+0.137987770 container init 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:45:42 compute-0 podman[268872]: 2026-02-02 15:45:42.63763532 +0000 UTC m=+0.142344246 container start 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:45:42 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [NOTICE]   (268908) : New worker (268914) forked
Feb 02 15:45:42 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [NOTICE]   (268908) : Loading success.
Feb 02 15:45:42 compute-0 podman[268924]: 2026-02-02 15:45:42.721754728 +0000 UTC m=+0.036899570 container create e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:45:42 compute-0 systemd[1]: Started libpod-conmon-e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000.scope.
Feb 02 15:45:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3663866703868c0be0c6d25507791b3696f56800b736a8dc3aaf3c13940a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3663866703868c0be0c6d25507791b3696f56800b736a8dc3aaf3c13940a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3663866703868c0be0c6d25507791b3696f56800b736a8dc3aaf3c13940a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3663866703868c0be0c6d25507791b3696f56800b736a8dc3aaf3c13940a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:42 compute-0 podman[268924]: 2026-02-02 15:45:42.704472086 +0000 UTC m=+0.019616908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:42 compute-0 podman[268924]: 2026-02-02 15:45:42.812443845 +0000 UTC m=+0.127588667 container init e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:45:42 compute-0 podman[268924]: 2026-02-02 15:45:42.818234556 +0000 UTC m=+0.133379358 container start e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:45:42 compute-0 podman[268924]: 2026-02-02 15:45:42.821094936 +0000 UTC m=+0.136239758 container attach e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:45:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:45:42
Feb 02 15:45:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:45:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:45:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control']
Feb 02 15:45:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]: {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     "0": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "devices": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "/dev/loop3"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             ],
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_name": "ceph_lv0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_size": "21470642176",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "name": "ceph_lv0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "tags": {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_name": "ceph",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.crush_device_class": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.encrypted": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.objectstore": "bluestore",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_id": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.vdo": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.with_tpm": "0"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             },
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "vg_name": "ceph_vg0"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         }
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     ],
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     "1": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "devices": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "/dev/loop4"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             ],
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_name": "ceph_lv1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_size": "21470642176",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "name": "ceph_lv1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "tags": {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_name": "ceph",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.crush_device_class": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.encrypted": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.objectstore": "bluestore",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_id": "1",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.vdo": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.with_tpm": "0"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             },
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "vg_name": "ceph_vg1"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         }
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     ],
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     "2": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "devices": [
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "/dev/loop5"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             ],
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_name": "ceph_lv2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_size": "21470642176",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "name": "ceph_lv2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "tags": {
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.cluster_name": "ceph",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.crush_device_class": "",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.encrypted": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.objectstore": "bluestore",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osd_id": "2",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.vdo": "0",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:                 "ceph.with_tpm": "0"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             },
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "type": "block",
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:             "vg_name": "ceph_vg2"
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:         }
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]:     ]
Feb 02 15:45:43 compute-0 xenodochial_hugle[268940]: }
Feb 02 15:45:43 compute-0 systemd[1]: libpod-e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000.scope: Deactivated successfully.
Feb 02 15:45:43 compute-0 podman[268924]: 2026-02-02 15:45:43.108955023 +0000 UTC m=+0.424099825 container died e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:45:43 compute-0 podman[268924]: 2026-02-02 15:45:43.157920435 +0000 UTC m=+0.473065237 container remove e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:45:43 compute-0 systemd[1]: libpod-conmon-e8973a026a8344a79be94a82b8763a1687421bb2db8d143606438e9b0fb8c000.scope: Deactivated successfully.
Feb 02 15:45:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:43 compute-0 sudo[268756]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:43 compute-0 sudo[268963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:45:43 compute-0 sudo[268963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:43 compute-0 sudo[268963]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:43 compute-0 sudo[268988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:45:43 compute-0 sudo[268988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70f3663866703868c0be0c6d25507791b3696f56800b736a8dc3aaf3c13940a-merged.mount: Deactivated successfully.
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.535353224 +0000 UTC m=+0.039085503 container create f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:45:43 compute-0 systemd[1]: Started libpod-conmon-f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33.scope.
Feb 02 15:45:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.519392184 +0000 UTC m=+0.023124473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.634178389 +0000 UTC m=+0.137910688 container init f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.642886551 +0000 UTC m=+0.146618830 container start f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.647476793 +0000 UTC m=+0.151209072 container attach f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 02 15:45:43 compute-0 charming_wright[269041]: 167 167
Feb 02 15:45:43 compute-0 systemd[1]: libpod-f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33.scope: Deactivated successfully.
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.650427585 +0000 UTC m=+0.154159874 container died f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9675d24f31317e6328209a16f42f701bb965436e541523c5352a8072ed2662b5-merged.mount: Deactivated successfully.
Feb 02 15:45:43 compute-0 podman[269025]: 2026-02-02 15:45:43.687474206 +0000 UTC m=+0.191206485 container remove f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:45:43 compute-0 systemd[1]: libpod-conmon-f73b42b528cf2086eff8aeb690bb16ce9c1a7972b0c2ccec90afb2498ea0da33.scope: Deactivated successfully.
Feb 02 15:45:43 compute-0 podman[269065]: 2026-02-02 15:45:43.83592302 +0000 UTC m=+0.039787579 container create 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:45:43 compute-0 ceph-mon[75334]: pgmap v1625: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 22 KiB/s wr, 13 op/s
Feb 02 15:45:43 compute-0 systemd[1]: Started libpod-conmon-25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c.scope.
Feb 02 15:45:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:45:43 compute-0 podman[269065]: 2026-02-02 15:45:43.815638687 +0000 UTC m=+0.019503286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a554a51e4471bd8a75104411c3caa5fc5da91b03e2f943c1a5ecc5d66c07cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a554a51e4471bd8a75104411c3caa5fc5da91b03e2f943c1a5ecc5d66c07cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a554a51e4471bd8a75104411c3caa5fc5da91b03e2f943c1a5ecc5d66c07cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a554a51e4471bd8a75104411c3caa5fc5da91b03e2f943c1a5ecc5d66c07cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:45:43 compute-0 podman[269065]: 2026-02-02 15:45:43.930178065 +0000 UTC m=+0.134042604 container init 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:45:43 compute-0 podman[269065]: 2026-02-02 15:45:43.937461102 +0000 UTC m=+0.141325651 container start 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:45:43 compute-0 podman[269065]: 2026-02-02 15:45:43.940965637 +0000 UTC m=+0.144830206 container attach 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:45:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 34 KiB/s wr, 15 op/s
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.420 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:44 compute-0 lvm[269156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:45:44 compute-0 lvm[269156]: VG ceph_vg0 finished
Feb 02 15:45:44 compute-0 lvm[269158]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:45:44 compute-0 lvm[269158]: VG ceph_vg1 finished
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.591 239549 DEBUG nova.compute.manager [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.592 239549 DEBUG oslo_concurrency.lockutils [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.592 239549 DEBUG oslo_concurrency.lockutils [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.592 239549 DEBUG oslo_concurrency.lockutils [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.592 239549 DEBUG nova.compute.manager [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] No waiting events found dispatching network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.592 239549 WARNING nova.compute.manager [req-8a1a4a18-21b2-434a-b736-350957bad46e req-d3db6b01-a270-4d4b-843b-22d948c9c5ad d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received unexpected event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b for instance with vm_state building and task_state spawning.
Feb 02 15:45:44 compute-0 lvm[269159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:45:44 compute-0 lvm[269159]: VG ceph_vg2 finished
Feb 02 15:45:44 compute-0 amazing_kapitsa[269081]: {}
Feb 02 15:45:44 compute-0 systemd[1]: libpod-25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c.scope: Deactivated successfully.
Feb 02 15:45:44 compute-0 systemd[1]: libpod-25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c.scope: Consumed 1.033s CPU time.
Feb 02 15:45:44 compute-0 podman[269065]: 2026-02-02 15:45:44.720698808 +0000 UTC m=+0.924563367 container died 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a554a51e4471bd8a75104411c3caa5fc5da91b03e2f943c1a5ecc5d66c07cbb-merged.mount: Deactivated successfully.
Feb 02 15:45:44 compute-0 podman[269065]: 2026-02-02 15:45:44.760656322 +0000 UTC m=+0.964520881 container remove 25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:45:44 compute-0 systemd[1]: libpod-conmon-25901e6da5bfc736cfec86523c068f30d917fe6f2caf15b19f995f6b97bfac9c.scope: Deactivated successfully.
Feb 02 15:45:44 compute-0 sudo[268988]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:45:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:45:44 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Feb 02 15:45:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Feb 02 15:45:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.889 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047144.8888328, dae5d782-1829-48e1-836e-4f8301eeb88f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.889 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] VM Started (Lifecycle Event)
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.892 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.898 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.901 239549 INFO nova.virt.libvirt.driver [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Instance spawned successfully.
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.901 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:45:44 compute-0 sudo[269180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:45:44 compute-0 sudo[269180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:45:44 compute-0 sudo[269180]: pam_unix(sudo:session): session closed for user root
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.915 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.920 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.923 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.923 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.924 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.924 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.924 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.924 239549 DEBUG nova.virt.libvirt.driver [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.955 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.955 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047144.8923423, dae5d782-1829-48e1-836e-4f8301eeb88f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.955 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] VM Paused (Lifecycle Event)
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:45:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.983 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.986 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047144.8969378, dae5d782-1829-48e1-836e-4f8301eeb88f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.986 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] VM Resumed (Lifecycle Event)
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.995 239549 INFO nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Took 7.09 seconds to spawn the instance on the hypervisor.
Feb 02 15:45:44 compute-0 nova_compute[239545]: 2026-02-02 15:45:44.995 239549 DEBUG nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.004 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.006 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.026 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.070 239549 INFO nova.compute.manager [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Took 9.32 seconds to build instance.
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.087 239549 DEBUG oslo_concurrency.lockutils [None req-4ce3d33d-ba45-4dfb-afb5-71a554cad6e9 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:45 compute-0 ceph-mon[75334]: pgmap v1626: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 34 KiB/s wr, 15 op/s
Feb 02 15:45:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:45 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:45:45 compute-0 ceph-mon[75334]: osdmap e466: 3 total, 3 up, 3 in
Feb 02 15:45:45 compute-0 nova_compute[239545]: 2026-02-02 15:45:45.857 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 23 KiB/s wr, 23 op/s
Feb 02 15:45:47 compute-0 podman[269207]: 2026-02-02 15:45:47.326506603 +0000 UTC m=+0.062940542 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 02 15:45:47 compute-0 podman[269206]: 2026-02-02 15:45:47.354190537 +0000 UTC m=+0.086552078 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127)
Feb 02 15:45:47 compute-0 ceph-mon[75334]: pgmap v1628: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 23 KiB/s wr, 23 op/s
Feb 02 15:45:47 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 23 KiB/s wr, 23 op/s
Feb 02 15:45:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.378 239549 DEBUG nova.compute.manager [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-changed-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.379 239549 DEBUG nova.compute.manager [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Refreshing instance network info cache due to event network-changed-15f9fd08-446b-4dd8-8735-546bb477e16b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.379 239549 DEBUG oslo_concurrency.lockutils [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.379 239549 DEBUG oslo_concurrency.lockutils [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.379 239549 DEBUG nova.network.neutron [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Refreshing network info cache for port 15f9fd08-446b-4dd8-8735-546bb477e16b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.422 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:49 compute-0 ceph-mon[75334]: pgmap v1629: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 23 KiB/s wr, 23 op/s
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.898 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.899 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.916 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.981 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.982 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:49 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 80 op/s
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.991 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:45:49 compute-0 nova_compute[239545]: 2026-02-02 15:45:49.991 239549 INFO nova.compute.claims [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.147 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:45:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392775336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.703 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.709 239549 DEBUG nova.compute.provider_tree [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.732 239549 DEBUG nova.scheduler.client.report [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.755 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.755 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.794 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.794 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.815 239549 INFO nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.833 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.858 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1392775336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.883 239549 INFO nova.virt.block_device [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Booting with volume 99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b at /dev/vda
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.985 239549 DEBUG nova.network.neutron [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updated VIF entry in instance network info cache for port 15f9fd08-446b-4dd8-8735-546bb477e16b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:50 compute-0 nova_compute[239545]: 2026-02-02 15:45:50.985 239549 DEBUG nova.network.neutron [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updating instance_info_cache with network_info: [{"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.003 239549 DEBUG oslo_concurrency.lockutils [req-1baf06ee-f9a0-4064-b368-967d2cb4584a req-b2502b8b-5369-489c-971e-0187a361eedd d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-dae5d782-1829-48e1-836e-4f8301eeb88f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.008 239549 DEBUG os_brick.utils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.010 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.018 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.019 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a394f931-ef06-48d1-90be-0bf9ce56c502]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.020 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.025 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.025 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba5aacb-f3e3-4de8-98a4-16880aff0458]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.027 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.033 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.033 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4ae8cd-7a27-4aa6-9f3d-c5c683ea4797]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.035 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[125e47b3-a4a6-4d1c-9eca-eda2f18ac8ee]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.035 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.058 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.061 239549 DEBUG os_brick.initiator.connectors.lightos [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.061 239549 DEBUG os_brick.initiator.connectors.lightos [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.061 239549 DEBUG os_brick.initiator.connectors.lightos [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.062 239549 DEBUG os_brick.utils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.062 239549 DEBUG nova.virt.block_device [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating existing volume attachment record: a623a415-88ce-412d-8637-18b0b682a829 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:45:51 compute-0 nova_compute[239545]: 2026-02-02 15:45:51.437 239549 DEBUG nova.policy [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b8e72a1cb6344869821da1cfc41bf8fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:45:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2678484178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:51 compute-0 ceph-mon[75334]: pgmap v1630: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 80 op/s
Feb 02 15:45:51 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2678484178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 23 KiB/s wr, 121 op/s
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.054 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.055 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.055 239549 INFO nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Creating image(s)
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.056 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.056 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Ensure instance console log exists: /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.056 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.056 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.056 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:52 compute-0 nova_compute[239545]: 2026-02-02 15:45:52.463 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Successfully created port: 6ac4365b-c94a-415d-9912-d322dc0a9d81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.065 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Successfully updated port: 6ac4365b-c94a-415d-9912-d322dc0a9d81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.083 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.083 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquired lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.083 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.150 239549 DEBUG nova.compute.manager [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.150 239549 DEBUG nova.compute.manager [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing instance network info cache due to event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.151 239549 DEBUG oslo_concurrency.lockutils [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:45:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.380 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:45:53 compute-0 ceph-mon[75334]: pgmap v1631: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 23 KiB/s wr, 121 op/s
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.952 239549 DEBUG nova.network.neutron [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating instance_info_cache with network_info: [{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.977 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Releasing lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.977 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Instance network_info: |[{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.977 239549 DEBUG oslo_concurrency.lockutils [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.977 239549 DEBUG nova.network.neutron [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.980 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Start _get_guest_xml network_info=[{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': 'a623a415-88ce-412d-8637-18b0b682a829', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e8ad53dc-3c67-426c-8c27-6467369ab230', 'attached_at': '', 'detached_at': '', 'volume_id': '99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b', 'serial': '99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.985 239549 WARNING nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.989 239549 DEBUG nova.virt.libvirt.host [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.990 239549 DEBUG nova.virt.libvirt.host [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:45:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.3 KiB/s wr, 117 op/s
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.996 239549 DEBUG nova.virt.libvirt.host [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.997 239549 DEBUG nova.virt.libvirt.host [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.997 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.997 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.998 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.998 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.998 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.998 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.999 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.999 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.999 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:45:53 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.999 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:53.999 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.000 239549 DEBUG nova.virt.hardware [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.019 239549 DEBUG nova.storage.rbd_utils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.022 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.423 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:45:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840492751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.556 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007675707072432932 of space, bias 1.0, pg target 0.23027121217298796 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0036825570012548594 of space, bias 1.0, pg target 1.1047671003764579 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.567505878404519e-06 of space, bias 1.0, pg target 0.0007676842576429513 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676739746528949 of space, bias 1.0, pg target 0.19963451842121557 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4576648310493459e-06 of space, bias 4.0, pg target 0.0017433671379350176 quantized to 16 (current 16)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:45:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.577 239549 DEBUG nova.virt.libvirt.vif [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:45:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1679658676',display_name='tempest-TestVolumeBootPattern-server-1679658676',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1679658676',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-utg3ih7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:45:50Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=e8ad53dc-3c67-426c-8c27-6467369ab230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.577 239549 DEBUG nova.network.os_vif_util [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.578 239549 DEBUG nova.network.os_vif_util [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.579 239549 DEBUG nova.objects.instance [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8ad53dc-3c67-426c-8c27-6467369ab230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.592 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <uuid>e8ad53dc-3c67-426c-8c27-6467369ab230</uuid>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <name>instance-0000001a</name>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:name>tempest-TestVolumeBootPattern-server-1679658676</nova:name>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:45:53</nova:creationTime>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:user uuid="b8e72a1cb6344869821da1cfc41bf8fc">tempest-TestVolumeBootPattern-77302308-project-member</nova:user>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:project uuid="8a28227cdc0a4390bebe7549f189bfe5">tempest-TestVolumeBootPattern-77302308</nova:project>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <nova:port uuid="6ac4365b-c94a-415d-9912-d322dc0a9d81">
Feb 02 15:45:54 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <system>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="serial">e8ad53dc-3c67-426c-8c27-6467369ab230</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="uuid">e8ad53dc-3c67-426c-8c27-6467369ab230</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </system>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <os>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </os>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <features>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </features>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config">
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b">
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </source>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:45:54 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <serial>99a4390c-2ad5-4a2c-ae1c-ebb5ec19389b</serial>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:a0:0a:66"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <target dev="tap6ac4365b-c9"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/console.log" append="off"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <video>
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </video>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:45:54 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:45:54 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:45:54 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:45:54 compute-0 nova_compute[239545]: </domain>
Feb 02 15:45:54 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.593 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Preparing to wait for external event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.593 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.593 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.594 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.594 239549 DEBUG nova.virt.libvirt.vif [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:45:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1679658676',display_name='tempest-TestVolumeBootPattern-server-1679658676',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1679658676',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-utg3ih7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:45:50Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=e8ad53dc-3c67-426c-8c27-6467369ab230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.595 239549 DEBUG nova.network.os_vif_util [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.595 239549 DEBUG nova.network.os_vif_util [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.595 239549 DEBUG os_vif [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.596 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.596 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.602 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.602 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ac4365b-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.603 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ac4365b-c9, col_values=(('external_ids', {'iface-id': '6ac4365b-c94a-415d-9912-d322dc0a9d81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:0a:66', 'vm-uuid': 'e8ad53dc-3c67-426c-8c27-6467369ab230'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:54 compute-0 NetworkManager[49171]: <info>  [1770047154.6057] manager: (tap6ac4365b-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.604 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.609 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.611 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.612 239549 INFO os_vif [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9')
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.657 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.657 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.657 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] No VIF found with MAC fa:16:3e:a0:0a:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.658 239549 INFO nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Using config drive
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.682 239549 DEBUG nova.storage.rbd_utils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1840492751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.971 239549 INFO nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Creating config drive at /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.974 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp35h5x8kg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.994 239549 DEBUG nova.network.neutron [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updated VIF entry in instance network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:45:54 compute-0 nova_compute[239545]: 2026-02-02 15:45:54.995 239549 DEBUG nova.network.neutron [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating instance_info_cache with network_info: [{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.012 239549 DEBUG oslo_concurrency.lockutils [req-6ee7d1c5-77df-4da1-ab4c-9d04fc236ae2 req-0a44d85d-a0a0-4c5b-a256-94891b9134ce d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.097 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp35h5x8kg" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.118 239549 DEBUG nova.storage.rbd_utils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] rbd image e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.128 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.241 239549 DEBUG oslo_concurrency.processutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config e8ad53dc-3c67-426c-8c27-6467369ab230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.242 239549 INFO nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Deleting local config drive /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230/disk.config because it was imported into RBD.
Feb 02 15:45:55 compute-0 kernel: tap6ac4365b-c9: entered promiscuous mode
Feb 02 15:45:55 compute-0 NetworkManager[49171]: <info>  [1770047155.2784] manager: (tap6ac4365b-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Feb 02 15:45:55 compute-0 ovn_controller[144995]: 2026-02-02T15:45:55Z|00237|binding|INFO|Claiming lport 6ac4365b-c94a-415d-9912-d322dc0a9d81 for this chassis.
Feb 02 15:45:55 compute-0 ovn_controller[144995]: 2026-02-02T15:45:55Z|00238|binding|INFO|6ac4365b-c94a-415d-9912-d322dc0a9d81: Claiming fa:16:3e:a0:0a:66 10.100.0.3
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.279 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.286 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:0a:66 10.100.0.3'], port_security=['fa:16:3e:a0:0a:66 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e8ad53dc-3c67-426c-8c27-6467369ab230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=6ac4365b-c94a-415d-9912-d322dc0a9d81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.287 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.289 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 6ac4365b-c94a-415d-9912-d322dc0a9d81 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 bound to our chassis
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.289 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.291 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:45:55 compute-0 ovn_controller[144995]: 2026-02-02T15:45:55Z|00239|binding|INFO|Setting lport 6ac4365b-c94a-415d-9912-d322dc0a9d81 ovn-installed in OVS
Feb 02 15:45:55 compute-0 ovn_controller[144995]: 2026-02-02T15:45:55Z|00240|binding|INFO|Setting lport 6ac4365b-c94a-415d-9912-d322dc0a9d81 up in Southbound
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.294 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.308 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0897981a-770e-4dd6-8b5a-bd22f63ee2c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 systemd-machined[207609]: New machine qemu-26-instance-0000001a.
Feb 02 15:45:55 compute-0 systemd-udevd[269397]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:45:55 compute-0 NetworkManager[49171]: <info>  [1770047155.3216] device (tap6ac4365b-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:45:55 compute-0 NetworkManager[49171]: <info>  [1770047155.3219] device (tap6ac4365b-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:45:55 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.338 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[07165a61-54ae-437a-8b2a-b78e2b7dee4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.341 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[24711349-94dc-44e9-8a63-21f7011fa547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.364 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf843c4-ff70-4a68-96dc-93b21cb5f4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.379 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d5629f10-a7de-40c0-a40f-149bc23af778]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459280, 'reachable_time': 26890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269407, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.391 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[558e076e-f427-4559-be48-7cb05f4d5296]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459291, 'tstamp': 459291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269410, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459294, 'tstamp': 459294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269410, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.394 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.395 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.396 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.397 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.397 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.397 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.502 239549 DEBUG nova.compute.manager [req-c42cc5c4-603f-41b5-9354-6a1c239d12e9 req-0d3b2497-7cb4-4eee-b76c-61e8e37f6d66 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.503 239549 DEBUG oslo_concurrency.lockutils [req-c42cc5c4-603f-41b5-9354-6a1c239d12e9 req-0d3b2497-7cb4-4eee-b76c-61e8e37f6d66 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.504 239549 DEBUG oslo_concurrency.lockutils [req-c42cc5c4-603f-41b5-9354-6a1c239d12e9 req-0d3b2497-7cb4-4eee-b76c-61e8e37f6d66 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.504 239549 DEBUG oslo_concurrency.lockutils [req-c42cc5c4-603f-41b5-9354-6a1c239d12e9 req-0d3b2497-7cb4-4eee-b76c-61e8e37f6d66 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.504 239549 DEBUG nova.compute.manager [req-c42cc5c4-603f-41b5-9354-6a1c239d12e9 req-0d3b2497-7cb4-4eee-b76c-61e8e37f6d66 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Processing event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.600 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.601 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:45:55 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:55.602 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.854 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047155.8541353, e8ad53dc-3c67-426c-8c27-6467369ab230 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.854 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] VM Started (Lifecycle Event)
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.856 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.859 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.862 239549 INFO nova.virt.libvirt.driver [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Instance spawned successfully.
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.863 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.884 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.891 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.895 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.895 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.896 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.896 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.896 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.897 239549 DEBUG nova.virt.libvirt.driver [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:45:55 compute-0 ceph-mon[75334]: pgmap v1632: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.3 KiB/s wr, 117 op/s
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.926 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.927 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047155.8542752, e8ad53dc-3c67-426c-8c27-6467369ab230 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.927 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] VM Paused (Lifecycle Event)
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.961 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.965 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047155.8593755, e8ad53dc-3c67-426c-8c27-6467369ab230 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.966 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] VM Resumed (Lifecycle Event)
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.971 239549 INFO nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Took 3.92 seconds to spawn the instance on the hypervisor.
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.971 239549 DEBUG nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.983 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:45:55 compute-0 nova_compute[239545]: 2026-02-02 15:45:55.987 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:45:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.5 KiB/s wr, 106 op/s
Feb 02 15:45:56 compute-0 nova_compute[239545]: 2026-02-02 15:45:56.012 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:45:56 compute-0 nova_compute[239545]: 2026-02-02 15:45:56.040 239549 INFO nova.compute.manager [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Took 6.08 seconds to build instance.
Feb 02 15:45:56 compute-0 nova_compute[239545]: 2026-02-02 15:45:56.055 239549 DEBUG oslo_concurrency.lockutils [None req-12ef8d50-cac0-4a59-8072-1dbe5f14ab50 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:56 compute-0 ovn_controller[144995]: 2026-02-02T15:45:56Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.14
Feb 02 15:45:56 compute-0 ovn_controller[144995]: 2026-02-02T15:45:56Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:33:48:48 10.100.0.14
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.579 239549 DEBUG nova.compute.manager [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.579 239549 DEBUG oslo_concurrency.lockutils [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.579 239549 DEBUG oslo_concurrency.lockutils [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.579 239549 DEBUG oslo_concurrency.lockutils [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.580 239549 DEBUG nova.compute.manager [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] No waiting events found dispatching network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:45:57 compute-0 nova_compute[239545]: 2026-02-02 15:45:57.580 239549 WARNING nova.compute.manager [req-fe1df98e-bf25-42d3-98c2-4e3ce213ebe5 req-a945f773-ae47-4d08-bb78-5953b79b824e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received unexpected event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 for instance with vm_state active and task_state None.
Feb 02 15:45:57 compute-0 ceph-mon[75334]: pgmap v1633: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.5 KiB/s wr, 106 op/s
Feb 02 15:45:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.7 KiB/s wr, 83 op/s
Feb 02 15:45:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:45:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:59.255 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:45:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:59.256 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:45:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:45:59.257 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:45:59 compute-0 nova_compute[239545]: 2026-02-02 15:45:59.426 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:59 compute-0 nova_compute[239545]: 2026-02-02 15:45:59.605 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:45:59 compute-0 ceph-mon[75334]: pgmap v1634: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.7 KiB/s wr, 83 op/s
Feb 02 15:45:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 19 KiB/s wr, 133 op/s
Feb 02 15:46:00 compute-0 ovn_controller[144995]: 2026-02-02T15:46:00Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.14
Feb 02 15:46:00 compute-0 ovn_controller[144995]: 2026-02-02T15:46:00Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:33:48:48 10.100.0.14
Feb 02 15:46:01 compute-0 nova_compute[239545]: 2026-02-02 15:46:01.606 239549 DEBUG nova.compute.manager [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:01 compute-0 nova_compute[239545]: 2026-02-02 15:46:01.607 239549 DEBUG nova.compute.manager [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing instance network info cache due to event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:46:01 compute-0 nova_compute[239545]: 2026-02-02 15:46:01.607 239549 DEBUG oslo_concurrency.lockutils [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:46:01 compute-0 nova_compute[239545]: 2026-02-02 15:46:01.608 239549 DEBUG oslo_concurrency.lockutils [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:46:01 compute-0 nova_compute[239545]: 2026-02-02 15:46:01.609 239549 DEBUG nova.network.neutron [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:46:01 compute-0 ovn_controller[144995]: 2026-02-02T15:46:01Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:48:48 10.100.0.14
Feb 02 15:46:01 compute-0 ovn_controller[144995]: 2026-02-02T15:46:01Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:48:48 10.100.0.14
Feb 02 15:46:01 compute-0 ceph-mon[75334]: pgmap v1635: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 19 KiB/s wr, 133 op/s
Feb 02 15:46:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 36 KiB/s wr, 161 op/s
Feb 02 15:46:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:03 compute-0 nova_compute[239545]: 2026-02-02 15:46:03.628 239549 DEBUG nova.network.neutron [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updated VIF entry in instance network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:46:03 compute-0 nova_compute[239545]: 2026-02-02 15:46:03.629 239549 DEBUG nova.network.neutron [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating instance_info_cache with network_info: [{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:03 compute-0 nova_compute[239545]: 2026-02-02 15:46:03.654 239549 DEBUG oslo_concurrency.lockutils [req-409a9c25-3a6f-427c-a419-b6876f51c007 req-ccb336c1-0e31-4865-a863-b05f8f294672 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:46:03 compute-0 ceph-mon[75334]: pgmap v1636: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 36 KiB/s wr, 161 op/s
Feb 02 15:46:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 36 KiB/s wr, 127 op/s
Feb 02 15:46:04 compute-0 nova_compute[239545]: 2026-02-02 15:46:04.427 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:04 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:04.604 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:04 compute-0 nova_compute[239545]: 2026-02-02 15:46:04.608 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:05 compute-0 ceph-mon[75334]: pgmap v1637: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 36 KiB/s wr, 127 op/s
Feb 02 15:46:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 127 op/s
Feb 02 15:46:06 compute-0 nova_compute[239545]: 2026-02-02 15:46:06.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:06 compute-0 nova_compute[239545]: 2026-02-02 15:46:06.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:46:06 compute-0 nova_compute[239545]: 2026-02-02 15:46:06.567 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:46:06 compute-0 nova_compute[239545]: 2026-02-02 15:46:06.568 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:07 compute-0 ceph-mon[75334]: pgmap v1638: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 127 op/s
Feb 02 15:46:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 126 op/s
Feb 02 15:46:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:09 compute-0 nova_compute[239545]: 2026-02-02 15:46:09.429 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:09 compute-0 nova_compute[239545]: 2026-02-02 15:46:09.610 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:09 compute-0 ceph-mon[75334]: pgmap v1639: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 126 op/s
Feb 02 15:46:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 126 op/s
Feb 02 15:46:10 compute-0 nova_compute[239545]: 2026-02-02 15:46:10.563 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:10 compute-0 nova_compute[239545]: 2026-02-02 15:46:10.564 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:11 compute-0 ovn_controller[144995]: 2026-02-02T15:46:11Z|00062|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.3
Feb 02 15:46:11 compute-0 ovn_controller[144995]: 2026-02-02T15:46:11Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a0:0a:66 10.100.0.3
Feb 02 15:46:11 compute-0 nova_compute[239545]: 2026-02-02 15:46:11.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:11 compute-0 ceph-mon[75334]: pgmap v1640: 305 pgs: 305 active+clean; 434 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 126 op/s
Feb 02 15:46:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 448 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 534 KiB/s wr, 101 op/s
Feb 02 15:46:12 compute-0 ceph-mon[75334]: pgmap v1641: 305 pgs: 305 active+clean; 448 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 534 KiB/s wr, 101 op/s
Feb 02 15:46:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:13 compute-0 nova_compute[239545]: 2026-02-02 15:46:13.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 448 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 872 KiB/s rd, 517 KiB/s wr, 38 op/s
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.431 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.580 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.581 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.581 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:46:14 compute-0 nova_compute[239545]: 2026-02-02 15:46:14.612 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:15 compute-0 ceph-mon[75334]: pgmap v1642: 305 pgs: 305 active+clean; 448 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 872 KiB/s rd, 517 KiB/s wr, 38 op/s
Feb 02 15:46:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:46:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074241794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.131 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.215 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.216 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.220 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.220 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.220 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.223 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.223 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.227 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.227 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.396 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.94209109432995GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.397 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.398 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.475 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.475 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 589acca5-dd9e-4695-b32a-0235932283d1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.475 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance dae5d782-1829-48e1-836e-4f8301eeb88f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.476 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance e8ad53dc-3c67-426c-8c27-6467369ab230 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.476 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.476 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:46:15 compute-0 nova_compute[239545]: 2026-02-02 15:46:15.557 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:46:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 516 KiB/s wr, 50 op/s
Feb 02 15:46:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2074241794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:46:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152181936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:16 compute-0 ovn_controller[144995]: 2026-02-02T15:46:16Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a0:0a:66 10.100.0.3
Feb 02 15:46:16 compute-0 nova_compute[239545]: 2026-02-02 15:46:16.146 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:46:16 compute-0 nova_compute[239545]: 2026-02-02 15:46:16.154 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:46:16 compute-0 nova_compute[239545]: 2026-02-02 15:46:16.174 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:46:16 compute-0 nova_compute[239545]: 2026-02-02 15:46:16.203 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:46:16 compute-0 nova_compute[239545]: 2026-02-02 15:46:16.204 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:16 compute-0 ovn_controller[144995]: 2026-02-02T15:46:16Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:0a:66 10.100.0.3
Feb 02 15:46:16 compute-0 ovn_controller[144995]: 2026-02-02T15:46:16Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:0a:66 10.100.0.3
Feb 02 15:46:17 compute-0 ceph-mon[75334]: pgmap v1643: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 516 KiB/s wr, 50 op/s
Feb 02 15:46:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3152181936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 506 KiB/s wr, 50 op/s
Feb 02 15:46:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:18 compute-0 nova_compute[239545]: 2026-02-02 15:46:18.204 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:18 compute-0 nova_compute[239545]: 2026-02-02 15:46:18.206 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:46:18 compute-0 podman[269500]: 2026-02-02 15:46:18.323974325 +0000 UTC m=+0.062500852 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:46:18 compute-0 podman[269499]: 2026-02-02 15:46:18.356467107 +0000 UTC m=+0.094710477 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 02 15:46:19 compute-0 ceph-mon[75334]: pgmap v1644: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 506 KiB/s wr, 50 op/s
Feb 02 15:46:19 compute-0 nova_compute[239545]: 2026-02-02 15:46:19.434 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:19 compute-0 nova_compute[239545]: 2026-02-02 15:46:19.613 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 506 KiB/s wr, 50 op/s
Feb 02 15:46:21 compute-0 ceph-mon[75334]: pgmap v1645: 305 pgs: 305 active+clean; 448 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 506 KiB/s wr, 50 op/s
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.620 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.621 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.621 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.622 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.622 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.624 239549 INFO nova.compute.manager [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Terminating instance
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.625 239549 DEBUG nova.compute.manager [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:46:21 compute-0 kernel: tap15f9fd08-44 (unregistering): left promiscuous mode
Feb 02 15:46:21 compute-0 NetworkManager[49171]: <info>  [1770047181.6677] device (tap15f9fd08-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:46:21 compute-0 ovn_controller[144995]: 2026-02-02T15:46:21Z|00241|binding|INFO|Releasing lport 15f9fd08-446b-4dd8-8735-546bb477e16b from this chassis (sb_readonly=0)
Feb 02 15:46:21 compute-0 ovn_controller[144995]: 2026-02-02T15:46:21Z|00242|binding|INFO|Setting lport 15f9fd08-446b-4dd8-8735-546bb477e16b down in Southbound
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.677 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 ovn_controller[144995]: 2026-02-02T15:46:21Z|00243|binding|INFO|Removing iface tap15f9fd08-44 ovn-installed in OVS
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.679 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.688 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:48:48 10.100.0.14'], port_security=['fa:16:3e:33:48:48 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dae5d782-1829-48e1-836e-4f8301eeb88f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d6011a66bdb41cea09b6018ceeec7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e2499c6-4637-44db-a491-4fe8bcc3f081', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b377d79-8c51-4c47-82b4-3451b94df20d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=15f9fd08-446b-4dd8-8735-546bb477e16b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.687 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.689 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 15f9fd08-446b-4dd8-8735-546bb477e16b in datapath b6f67b7a-3fd7-4623-9937-142eb5dabe2c unbound from our chassis
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.691 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6f67b7a-3fd7-4623-9937-142eb5dabe2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.693 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec8911f-6fee-4fd6-82ab-15fbf20a156a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.694 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c namespace which is not needed anymore
Feb 02 15:46:21 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb 02 15:46:21 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 15.504s CPU time.
Feb 02 15:46:21 compute-0 systemd-machined[207609]: Machine qemu-25-instance-00000019 terminated.
Feb 02 15:46:21 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [NOTICE]   (268908) : haproxy version is 2.8.14-c23fe91
Feb 02 15:46:21 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [NOTICE]   (268908) : path to executable is /usr/sbin/haproxy
Feb 02 15:46:21 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [WARNING]  (268908) : Exiting Master process...
Feb 02 15:46:21 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [ALERT]    (268908) : Current worker (268914) exited with code 143 (Terminated)
Feb 02 15:46:21 compute-0 neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c[268904]: [WARNING]  (268908) : All workers exited. Exiting... (0)
Feb 02 15:46:21 compute-0 systemd[1]: libpod-14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044.scope: Deactivated successfully.
Feb 02 15:46:21 compute-0 podman[269571]: 2026-02-02 15:46:21.816846065 +0000 UTC m=+0.043675254 container died 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044-userdata-shm.mount: Deactivated successfully.
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.845 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c357fd5ace59e88be6580fa97564ac0c8f187edc52399abb40aefa9365b3694-merged.mount: Deactivated successfully.
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.852 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.857 239549 INFO nova.virt.libvirt.driver [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Instance destroyed successfully.
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.858 239549 DEBUG nova.objects.instance [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lazy-loading 'resources' on Instance uuid dae5d782-1829-48e1-836e-4f8301eeb88f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:46:21 compute-0 podman[269571]: 2026-02-02 15:46:21.861615315 +0000 UTC m=+0.088444504 container cleanup 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 02 15:46:21 compute-0 systemd[1]: libpod-conmon-14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044.scope: Deactivated successfully.
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.872 239549 DEBUG nova.virt.libvirt.vif [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:45:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1867779399',display_name='tempest-TransferEncryptedVolumeTest-server-1867779399',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1867779399',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH4lPXdat6TIfSOSKg5xYklqsZ5blpFjr9pJRpxK9EoeTRyB9ECumCAF+ZB72uHiJN6zvQWtj3yCwumCfWWkS7+am6bvE7SvfzxW5K4yPSBZ+jdyG6zmzmLhEEjLuT4TCQ==',key_name='tempest-TransferEncryptedVolumeTest-1394740004',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:45:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d6011a66bdb41cea09b6018ceeec7d4',ramdisk_id='',reservation_id='r-9hnysknf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1895614673',owner_user_name='tempest-TransferEncryptedVolumeTest-1895614673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:45:45Z,user_data=None,user_id='df03e4d41ae644fca567cfe648b7bad6',uuid=dae5d782-1829-48e1-836e-4f8301eeb88f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.872 239549 DEBUG nova.network.os_vif_util [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converting VIF {"id": "15f9fd08-446b-4dd8-8735-546bb477e16b", "address": "fa:16:3e:33:48:48", "network": {"id": "b6f67b7a-3fd7-4623-9937-142eb5dabe2c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1837811353-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d6011a66bdb41cea09b6018ceeec7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15f9fd08-44", "ovs_interfaceid": "15f9fd08-446b-4dd8-8735-546bb477e16b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.873 239549 DEBUG nova.network.os_vif_util [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.873 239549 DEBUG os_vif [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.875 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.876 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f9fd08-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.877 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.880 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.883 239549 INFO os_vif [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:48:48,bridge_name='br-int',has_traffic_filtering=True,id=15f9fd08-446b-4dd8-8735-546bb477e16b,network=Network(b6f67b7a-3fd7-4623-9937-142eb5dabe2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15f9fd08-44')
Feb 02 15:46:21 compute-0 podman[269607]: 2026-02-02 15:46:21.924729031 +0000 UTC m=+0.043232153 container remove 14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.929 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[93fa3984-cca7-4275-acc9-c1cf2e67f60d]: (4, ('Mon Feb  2 03:46:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044)\n14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044\nMon Feb  2 03:46:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c (14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044)\n14dcdd50e05646274d944fc1dccc0ebeb49d05d6646b614cb262f1e48a91d044\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.931 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[08b3d3e4-0646-459f-88fc-1a0c73b6786b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.932 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6f67b7a-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.934 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 kernel: tapb6f67b7a-30: left promiscuous mode
Feb 02 15:46:21 compute-0 nova_compute[239545]: 2026-02-02 15:46:21.940 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.942 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[471ad5d4-a660-4a9e-b985-b5581878986c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.955 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[76b64e85-cac6-4086-81c2-b45a9ec47799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.956 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8f73a47d-70b9-412b-9930-070825912529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.976 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[56fb028f-5865-4147-9e43-cab1c48bcfd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462789, 'reachable_time': 35192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269640, 'error': None, 'target': 'ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:21 compute-0 systemd[1]: run-netns-ovnmeta\x2db6f67b7a\x2d3fd7\x2d4623\x2d9937\x2d142eb5dabe2c.mount: Deactivated successfully.
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.979 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6f67b7a-3fd7-4623-9937-142eb5dabe2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:46:21 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:21.979 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bbfa74-60d6-4651-9449-2b8a08da3cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 56 op/s
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.018 239549 INFO nova.virt.libvirt.driver [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Deleting instance files /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f_del
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.019 239549 INFO nova.virt.libvirt.driver [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Deletion of /var/lib/nova/instances/dae5d782-1829-48e1-836e-4f8301eeb88f_del complete
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.069 239549 INFO nova.compute.manager [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Took 0.44 seconds to destroy the instance on the hypervisor.
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.070 239549 DEBUG oslo.service.loopingcall [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.070 239549 DEBUG nova.compute.manager [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.070 239549 DEBUG nova.network.neutron [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.665 239549 DEBUG nova.compute.manager [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-unplugged-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.665 239549 DEBUG oslo_concurrency.lockutils [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.665 239549 DEBUG oslo_concurrency.lockutils [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.666 239549 DEBUG oslo_concurrency.lockutils [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.666 239549 DEBUG nova.compute.manager [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] No waiting events found dispatching network-vif-unplugged-15f9fd08-446b-4dd8-8735-546bb477e16b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:22 compute-0 nova_compute[239545]: 2026-02-02 15:46:22.666 239549 DEBUG nova.compute.manager [req-89190343-f887-4f76-b5cb-a4ff78bab46d req-ee657cbb-414b-4b08-ba60-c8d0588ed8e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-unplugged-15f9fd08-446b-4dd8-8735-546bb477e16b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.035 239549 DEBUG nova.network.neutron [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.051 239549 INFO nova.compute.manager [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Took 0.98 seconds to deallocate network for instance.
Feb 02 15:46:23 compute-0 ceph-mon[75334]: pgmap v1646: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 56 op/s
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.090 239549 DEBUG nova.compute.manager [req-bd5e0496-eefe-4756-b6a8-df39fd82343f req-018ad99e-0801-475b-bb7a-d6df09d59fbf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-deleted-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.207 239549 INFO nova.compute.manager [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Took 0.16 seconds to detach 1 volumes for instance.
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.267 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.268 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.360 239549 DEBUG oslo_concurrency.processutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:46:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:46:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201639015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.922 239549 DEBUG oslo_concurrency.processutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.929 239549 DEBUG nova.compute.provider_tree [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.945 239549 DEBUG nova.scheduler.client.report [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.969 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:23 compute-0 nova_compute[239545]: 2026-02-02 15:46:23.994 239549 INFO nova.scheduler.client.report [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Deleted allocations for instance dae5d782-1829-48e1-836e-4f8301eeb88f
Feb 02 15:46:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 452 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 91 KiB/s wr, 35 op/s
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.053 239549 DEBUG oslo_concurrency.lockutils [None req-321cb4e4-72fb-4b05-a00d-968d07bf8894 df03e4d41ae644fca567cfe648b7bad6 6d6011a66bdb41cea09b6018ceeec7d4 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4201639015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.437 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.741 239549 DEBUG nova.compute.manager [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.741 239549 DEBUG oslo_concurrency.lockutils [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.742 239549 DEBUG oslo_concurrency.lockutils [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.742 239549 DEBUG oslo_concurrency.lockutils [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "dae5d782-1829-48e1-836e-4f8301eeb88f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.743 239549 DEBUG nova.compute.manager [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] No waiting events found dispatching network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:24 compute-0 nova_compute[239545]: 2026-02-02 15:46:24.743 239549 WARNING nova.compute.manager [req-05c65a13-fb90-40e3-9894-6065eb2b663a req-61a741de-fb7d-4ef5-b2be-15362b38fb82 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Received unexpected event network-vif-plugged-15f9fd08-446b-4dd8-8735-546bb477e16b for instance with vm_state deleted and task_state None.
Feb 02 15:46:25 compute-0 ceph-mon[75334]: pgmap v1647: 305 pgs: 305 active+clean; 452 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 91 KiB/s wr, 35 op/s
Feb 02 15:46:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:46:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7205 writes, 32K keys, 7205 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 7205 writes, 7205 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2079 writes, 9788 keys, 2079 commit groups, 1.0 writes per commit group, ingest: 12.43 MB, 0.02 MB/s
                                           Interval WAL: 2079 writes, 2079 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    126.6      0.30              0.07        17    0.017       0      0       0.0       0.0
                                             L6      1/0   10.36 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    175.1    144.9      0.89              0.32        16    0.056     81K   9456       0.0       0.0
                                            Sum      1/0   10.36 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5    131.6    140.3      1.19              0.40        33    0.036     81K   9456       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.8    157.6    165.7      0.37              0.11        10    0.037     32K   3661       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    175.1    144.9      0.89              0.32        16    0.056     81K   9456       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    127.8      0.29              0.07        16    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.036, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.07 MB/s read, 1.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 304.00 MB usage: 18.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000256 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1213,17.84 MB,5.86971%) FilterBlock(34,235.23 KB,0.0755661%) IndexBlock(34,463.73 KB,0.148969%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:46:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 95 KiB/s wr, 37 op/s
Feb 02 15:46:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:46:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085192224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:46:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085192224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:26 compute-0 nova_compute[239545]: 2026-02-02 15:46:26.879 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:27 compute-0 ceph-mon[75334]: pgmap v1648: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 95 KiB/s wr, 37 op/s
Feb 02 15:46:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1085192224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1085192224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 92 KiB/s wr, 24 op/s
Feb 02 15:46:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:46:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863784057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:46:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3863784057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3863784057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3863784057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:29 compute-0 ceph-mon[75334]: pgmap v1649: 305 pgs: 305 active+clean; 452 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 92 KiB/s wr, 24 op/s
Feb 02 15:46:29 compute-0 nova_compute[239545]: 2026-02-02 15:46:29.439 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 383 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 92 KiB/s wr, 38 op/s
Feb 02 15:46:31 compute-0 ceph-mon[75334]: pgmap v1650: 305 pgs: 305 active+clean; 383 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 92 KiB/s wr, 38 op/s
Feb 02 15:46:31 compute-0 nova_compute[239545]: 2026-02-02 15:46:31.882 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 269 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 95 KiB/s wr, 44 op/s
Feb 02 15:46:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:33 compute-0 ceph-mon[75334]: pgmap v1651: 305 pgs: 305 active+clean; 269 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 95 KiB/s wr, 44 op/s
Feb 02 15:46:33 compute-0 ovn_controller[144995]: 2026-02-02T15:46:33Z|00244|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:46:33 compute-0 ovn_controller[144995]: 2026-02-02T15:46:33Z|00245|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:46:33 compute-0 nova_compute[239545]: 2026-02-02 15:46:33.569 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 269 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 6.8 KiB/s wr, 38 op/s
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.441 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.778 239549 DEBUG nova.compute.manager [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.778 239549 DEBUG nova.compute.manager [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing instance network info cache due to event network-changed-6ac4365b-c94a-415d-9912-d322dc0a9d81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.778 239549 DEBUG oslo_concurrency.lockutils [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.778 239549 DEBUG oslo_concurrency.lockutils [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.779 239549 DEBUG nova.network.neutron [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Refreshing network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.906 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.907 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.907 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.907 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.908 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.910 239549 INFO nova.compute.manager [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Terminating instance
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.912 239549 DEBUG nova.compute.manager [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:46:34 compute-0 kernel: tap6ac4365b-c9 (unregistering): left promiscuous mode
Feb 02 15:46:34 compute-0 NetworkManager[49171]: <info>  [1770047194.9580] device (tap6ac4365b-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:46:34 compute-0 ovn_controller[144995]: 2026-02-02T15:46:34Z|00246|binding|INFO|Releasing lport 6ac4365b-c94a-415d-9912-d322dc0a9d81 from this chassis (sb_readonly=0)
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.964 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:34 compute-0 ovn_controller[144995]: 2026-02-02T15:46:34Z|00247|binding|INFO|Setting lport 6ac4365b-c94a-415d-9912-d322dc0a9d81 down in Southbound
Feb 02 15:46:34 compute-0 ovn_controller[144995]: 2026-02-02T15:46:34Z|00248|binding|INFO|Removing iface tap6ac4365b-c9 ovn-installed in OVS
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.968 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:34 compute-0 nova_compute[239545]: 2026-02-02 15:46:34.973 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:34 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:34.975 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:0a:66 10.100.0.3'], port_security=['fa:16:3e:a0:0a:66 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e8ad53dc-3c67-426c-8c27-6467369ab230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=6ac4365b-c94a-415d-9912-d322dc0a9d81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:46:34 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:34.976 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 6ac4365b-c94a-415d-9912-d322dc0a9d81 in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:46:34 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:34.978 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473fc4ca-a137-447b-9349-9f4677babee6
Feb 02 15:46:34 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:34.992 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[888e4fd5-78f1-4bbb-92e1-2149b6e631a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.018 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c48048-6bfd-4a1d-b943-99bc57863802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.021 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[73911996-4479-4d96-8a45-8ff01b564945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb 02 15:46:35 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 17.246s CPU time.
Feb 02 15:46:35 compute-0 systemd-machined[207609]: Machine qemu-26-instance-0000001a terminated.
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.049 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4c4815-f136-460d-9340-b366c6c0392b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.064 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[784f54ad-cae9-4504-9331-751d16454aaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473fc4ca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:14:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459280, 'reachable_time': 15928, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269676, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.082 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5489dd-2eed-4994-b03d-51450c1fac1b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459291, 'tstamp': 459291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269677, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap473fc4ca-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459294, 'tstamp': 459294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269677, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.084 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.086 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.089 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.089 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473fc4ca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.090 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.090 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473fc4ca-a0, col_values=(('external_ids', {'iface-id': '8ec763b2-de85-4ed5-bb5d-67e76d81beae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:35 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:35.090 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.134 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.138 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.147 239549 INFO nova.virt.libvirt.driver [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Instance destroyed successfully.
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.148 239549 DEBUG nova.objects.instance [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid e8ad53dc-3c67-426c-8c27-6467369ab230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.166 239549 DEBUG nova.virt.libvirt.vif [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:45:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1679658676',display_name='tempest-TestVolumeBootPattern-server-1679658676',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1679658676',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:45:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-utg3ih7x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:45:56Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=e8ad53dc-3c67-426c-8c27-6467369ab230,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.167 239549 DEBUG nova.network.os_vif_util [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.167 239549 DEBUG nova.network.os_vif_util [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.168 239549 DEBUG os_vif [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.169 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.169 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ac4365b-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.171 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.173 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.176 239549 INFO os_vif [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:0a:66,bridge_name='br-int',has_traffic_filtering=True,id=6ac4365b-c94a-415d-9912-d322dc0a9d81,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ac4365b-c9')
Feb 02 15:46:35 compute-0 ceph-mon[75334]: pgmap v1652: 305 pgs: 305 active+clean; 269 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 6.8 KiB/s wr, 38 op/s
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.304 239549 INFO nova.virt.libvirt.driver [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Deleting instance files /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230_del
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.305 239549 INFO nova.virt.libvirt.driver [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Deletion of /var/lib/nova/instances/e8ad53dc-3c67-426c-8c27-6467369ab230_del complete
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.368 239549 INFO nova.compute.manager [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Took 0.46 seconds to destroy the instance on the hypervisor.
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.369 239549 DEBUG oslo.service.loopingcall [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.369 239549 DEBUG nova.compute.manager [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.369 239549 DEBUG nova.network.neutron [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.967 239549 DEBUG nova.network.neutron [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:35 compute-0 nova_compute[239545]: 2026-02-02 15:46:35.992 239549 INFO nova.compute.manager [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Took 0.62 seconds to deallocate network for instance.
Feb 02 15:46:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 53 KiB/s wr, 41 op/s
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.081 239549 DEBUG nova.compute.manager [req-fc5c31de-7dbf-4155-a628-0a1b208b21c3 req-756da546-f5cb-4af6-b087-a5648e844710 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-vif-deleted-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.162 239549 INFO nova.compute.manager [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Took 0.17 seconds to detach 1 volumes for instance.
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.208 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.208 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.308 239549 DEBUG oslo_concurrency.processutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.564 239549 DEBUG nova.network.neutron [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updated VIF entry in instance network info cache for port 6ac4365b-c94a-415d-9912-d322dc0a9d81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.565 239549 DEBUG nova.network.neutron [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Updating instance_info_cache with network_info: [{"id": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "address": "fa:16:3e:a0:0a:66", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ac4365b-c9", "ovs_interfaceid": "6ac4365b-c94a-415d-9912-d322dc0a9d81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.589 239549 DEBUG oslo_concurrency.lockutils [req-e696bffe-4ce4-4a58-ab47-a24d58f1ab84 req-b5c218ba-c909-4bd4-8199-882f87d2dbc1 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-e8ad53dc-3c67-426c-8c27-6467369ab230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.857 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047181.8558624, dae5d782-1829-48e1-836e-4f8301eeb88f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.858 239549 INFO nova.compute.manager [-] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] VM Stopped (Lifecycle Event)
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.862 239549 DEBUG nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-vif-unplugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.863 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.863 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.864 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:46:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667553639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.864 239549 DEBUG nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] No waiting events found dispatching network-vif-unplugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.865 239549 WARNING nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received unexpected event network-vif-unplugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 for instance with vm_state deleted and task_state None.
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.866 239549 DEBUG nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.866 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.867 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.867 239549 DEBUG oslo_concurrency.lockutils [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.868 239549 DEBUG nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] No waiting events found dispatching network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.868 239549 WARNING nova.compute.manager [req-f15b3456-2dea-400e-9c22-93034667069c req-c7756cc4-9a22-47e7-b250-5bb1bb2e2097 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Received unexpected event network-vif-plugged-6ac4365b-c94a-415d-9912-d322dc0a9d81 for instance with vm_state deleted and task_state None.
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.881 239549 DEBUG oslo_concurrency.processutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.887 239549 DEBUG nova.compute.provider_tree [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.889 239549 DEBUG nova.compute.manager [None req-2a036afe-a43a-4600-8192-e627964c272b - - - - - -] [instance: dae5d782-1829-48e1-836e-4f8301eeb88f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.901 239549 DEBUG nova.scheduler.client.report [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:46:36 compute-0 nova_compute[239545]: 2026-02-02 15:46:36.925 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:37 compute-0 nova_compute[239545]: 2026-02-02 15:46:37.009 239549 INFO nova.scheduler.client.report [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance e8ad53dc-3c67-426c-8c27-6467369ab230
Feb 02 15:46:37 compute-0 nova_compute[239545]: 2026-02-02 15:46:37.069 239549 DEBUG oslo_concurrency.lockutils [None req-38113830-30f7-43b1-9c1c-12bdd4f2bd37 b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "e8ad53dc-3c67-426c-8c27-6467369ab230" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:37 compute-0 ceph-mon[75334]: pgmap v1653: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 53 KiB/s wr, 41 op/s
Feb 02 15:46:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1667553639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 49 KiB/s wr, 27 op/s
Feb 02 15:46:38 compute-0 ovn_controller[144995]: 2026-02-02T15:46:38Z|00249|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:46:38 compute-0 ovn_controller[144995]: 2026-02-02T15:46:38Z|00250|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:46:38 compute-0 nova_compute[239545]: 2026-02-02 15:46:38.147 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:46:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257162608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:46:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257162608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:39 compute-0 ceph-mon[75334]: pgmap v1654: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 49 KiB/s wr, 27 op/s
Feb 02 15:46:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1257162608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1257162608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:39 compute-0 nova_compute[239545]: 2026-02-02 15:46:39.443 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 49 KiB/s wr, 30 op/s
Feb 02 15:46:40 compute-0 nova_compute[239545]: 2026-02-02 15:46:40.171 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Feb 02 15:46:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Feb 02 15:46:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Feb 02 15:46:41 compute-0 ceph-mon[75334]: pgmap v1655: 305 pgs: 305 active+clean; 273 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 49 KiB/s wr, 30 op/s
Feb 02 15:46:41 compute-0 ceph-mon[75334]: osdmap e467: 3 total, 3 up, 3 in
Feb 02 15:46:41 compute-0 ovn_controller[144995]: 2026-02-02T15:46:41Z|00251|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:46:41 compute-0 ovn_controller[144995]: 2026-02-02T15:46:41Z|00252|binding|INFO|Releasing lport 8ec763b2-de85-4ed5-bb5d-67e76d81beae from this chassis (sb_readonly=0)
Feb 02 15:46:41 compute-0 nova_compute[239545]: 2026-02-02 15:46:41.571 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 251 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 58 KiB/s wr, 62 op/s
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.020 239549 DEBUG nova.compute.manager [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.020 239549 DEBUG nova.compute.manager [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing instance network info cache due to event network-changed-4cb7a453-9db5-4fbc-a7ba-59600d76589c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.020 239549 DEBUG oslo_concurrency.lockutils [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.021 239549 DEBUG oslo_concurrency.lockutils [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.021 239549 DEBUG nova.network.neutron [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Refreshing network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.086 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.086 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.087 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.087 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.088 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.089 239549 INFO nova.compute.manager [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Terminating instance
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.091 239549 DEBUG nova.compute.manager [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:46:42 compute-0 kernel: tap4cb7a453-9d (unregistering): left promiscuous mode
Feb 02 15:46:42 compute-0 NetworkManager[49171]: <info>  [1770047202.1402] device (tap4cb7a453-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.146 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 ovn_controller[144995]: 2026-02-02T15:46:42Z|00253|binding|INFO|Releasing lport 4cb7a453-9db5-4fbc-a7ba-59600d76589c from this chassis (sb_readonly=0)
Feb 02 15:46:42 compute-0 ovn_controller[144995]: 2026-02-02T15:46:42Z|00254|binding|INFO|Setting lport 4cb7a453-9db5-4fbc-a7ba-59600d76589c down in Southbound
Feb 02 15:46:42 compute-0 ovn_controller[144995]: 2026-02-02T15:46:42Z|00255|binding|INFO|Removing iface tap4cb7a453-9d ovn-installed in OVS
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.149 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.153 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.155 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:66:84 10.100.0.10'], port_security=['fa:16:3e:f4:66:84 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '589acca5-dd9e-4695-b32a-0235932283d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473fc4ca-a137-447b-9349-9f4677babee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a28227cdc0a4390bebe7549f189bfe5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '413c222f-1970-4ec0-b0a7-3e88c9a779d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061039df-5525-4ce5-81d9-5c81632af158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=4cb7a453-9db5-4fbc-a7ba-59600d76589c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.156 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 4cb7a453-9db5-4fbc-a7ba-59600d76589c in datapath 473fc4ca-a137-447b-9349-9f4677babee6 unbound from our chassis
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.157 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473fc4ca-a137-447b-9349-9f4677babee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.158 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3efa3f-64f7-422b-b0be-41f2b7f8f4fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.158 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 namespace which is not needed anymore
Feb 02 15:46:42 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb 02 15:46:42 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 15.743s CPU time.
Feb 02 15:46:42 compute-0 systemd-machined[207609]: Machine qemu-24-instance-00000018 terminated.
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [NOTICE]   (268076) : haproxy version is 2.8.14-c23fe91
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [NOTICE]   (268076) : path to executable is /usr/sbin/haproxy
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [WARNING]  (268076) : Exiting Master process...
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [WARNING]  (268076) : Exiting Master process...
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [ALERT]    (268076) : Current worker (268078) exited with code 143 (Terminated)
Feb 02 15:46:42 compute-0 neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6[268072]: [WARNING]  (268076) : All workers exited. Exiting... (0)
Feb 02 15:46:42 compute-0 systemd[1]: libpod-6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365.scope: Deactivated successfully.
Feb 02 15:46:42 compute-0 podman[269754]: 2026-02-02 15:46:42.27321953 +0000 UTC m=+0.038724403 container died 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365-userdata-shm.mount: Deactivated successfully.
Feb 02 15:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7a1efaa94d0ed1927c87de7e54ac1af8d879769fff2bbbfe3d58141a752f235-merged.mount: Deactivated successfully.
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.311 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 podman[269754]: 2026-02-02 15:46:42.315491979 +0000 UTC m=+0.080996862 container cleanup 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.317 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 systemd[1]: libpod-conmon-6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365.scope: Deactivated successfully.
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.326 239549 INFO nova.virt.libvirt.driver [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Instance destroyed successfully.
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.326 239549 DEBUG nova.objects.instance [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lazy-loading 'resources' on Instance uuid 589acca5-dd9e-4695-b32a-0235932283d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.339 239549 DEBUG nova.virt.libvirt.vif [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:44:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1837701869',display_name='tempest-TestVolumeBootPattern-server-1837701869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1837701869',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVN3emQ3pa4ZbuxCkTmhDe1Vp6VQUY67rC+ITHBo+Tq5uE7NmayODM4fxB/CHWvUnJ+8HqCsQ4XM6GBraeEG0bMnApJ123caLkGqWErsSAkkLYVHXE8VvM9eqpwYxSifA==',key_name='tempest-TestVolumeBootPattern-570771141',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:45:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a28227cdc0a4390bebe7549f189bfe5',ramdisk_id='',reservation_id='r-i8jac8m5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-77302308',owner_user_name='tempest-TestVolumeBootPattern-77302308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:45:07Z,user_data=None,user_id='b8e72a1cb6344869821da1cfc41bf8fc',uuid=589acca5-dd9e-4695-b32a-0235932283d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.340 239549 DEBUG nova.network.os_vif_util [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converting VIF {"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.341 239549 DEBUG nova.network.os_vif_util [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.341 239549 DEBUG os_vif [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.344 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.344 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cb7a453-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.351 239549 INFO os_vif [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:66:84,bridge_name='br-int',has_traffic_filtering=True,id=4cb7a453-9db5-4fbc-a7ba-59600d76589c,network=Network(473fc4ca-a137-447b-9349-9f4677babee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cb7a453-9d')
Feb 02 15:46:42 compute-0 podman[269791]: 2026-02-02 15:46:42.378730659 +0000 UTC m=+0.042692331 container remove 6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.383 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a95be09d-a697-49ff-af78-521ed1a2abd7]: (4, ('Mon Feb  2 03:46:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365)\n6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365\nMon Feb  2 03:46:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 (6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365)\n6c2afc788c66868967ed709332c9f2cd3b36f9d62cea72d5243156e8fbd3c365\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.385 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1ba1c5-6950-4e30-a30a-8f71ca5e30cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.385 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473fc4ca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.387 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 kernel: tap473fc4ca-a0: left promiscuous mode
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.397 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.398 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5af571c-4fbb-4b83-92c2-d3de0b5f6af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.412 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6bf9d5-b52a-4ff8-868e-8e4b9d7163d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.413 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e703fcb8-1ce3-47f1-b08b-a85b3c1c151f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.430 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a88ff851-4993-47f6-af5b-ada75793d5b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459274, 'reachable_time': 18290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269823, 'error': None, 'target': 'ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.432 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473fc4ca-a137-447b-9349-9f4677babee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:46:42 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:42.433 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[10eaecca-8377-4d30-abb8-3e6cc0ea24ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:46:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d473fc4ca\x2da137\x2d447b\x2d9349\x2d9f4677babee6.mount: Deactivated successfully.
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.493 239549 INFO nova.virt.libvirt.driver [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Deleting instance files /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1_del
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.494 239549 INFO nova.virt.libvirt.driver [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Deletion of /var/lib/nova/instances/589acca5-dd9e-4695-b32a-0235932283d1_del complete
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.557 239549 INFO nova.compute.manager [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Took 0.46 seconds to destroy the instance on the hypervisor.
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.558 239549 DEBUG oslo.service.loopingcall [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.558 239549 DEBUG nova.compute.manager [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:46:42 compute-0 nova_compute[239545]: 2026-02-02 15:46:42.558 239549 DEBUG nova.network.neutron [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:46:42
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log']
Feb 02 15:46:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:46:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:43 compute-0 ceph-mon[75334]: pgmap v1657: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 251 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 58 KiB/s wr, 62 op/s
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.703 239549 DEBUG nova.network.neutron [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.760 239549 INFO nova.compute.manager [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Took 1.20 seconds to deallocate network for instance.
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.807 239549 DEBUG nova.compute.manager [req-64edcb9e-e7b9-4836-8682-2246c3abd519 req-e0a278aa-76fb-4b7f-b4a5-48984d8799e7 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-vif-deleted-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.812 239549 DEBUG nova.network.neutron [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updated VIF entry in instance network info cache for port 4cb7a453-9db5-4fbc-a7ba-59600d76589c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.813 239549 DEBUG nova.network.neutron [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Updating instance_info_cache with network_info: [{"id": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "address": "fa:16:3e:f4:66:84", "network": {"id": "473fc4ca-a137-447b-9349-9f4677babee6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-260660660-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a28227cdc0a4390bebe7549f189bfe5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cb7a453-9d", "ovs_interfaceid": "4cb7a453-9db5-4fbc-a7ba-59600d76589c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:46:43 compute-0 nova_compute[239545]: 2026-02-02 15:46:43.838 239549 DEBUG oslo_concurrency.lockutils [req-0e01dc2b-ef9e-4814-94a5-2b682869cff2 req-254bb6a6-0b4c-48f6-94f3-8275d94097ac d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-589acca5-dd9e-4695-b32a-0235932283d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 251 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 58 KiB/s wr, 62 op/s
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.083 239549 INFO nova.compute.manager [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Took 0.32 seconds to detach 1 volumes for instance.
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.126 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.126 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.155 239549 DEBUG nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-vif-unplugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.155 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.155 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.155 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.156 239549 DEBUG nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] No waiting events found dispatching network-vif-unplugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.156 239549 WARNING nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received unexpected event network-vif-unplugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c for instance with vm_state deleted and task_state None.
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.156 239549 DEBUG nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.156 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "589acca5-dd9e-4695-b32a-0235932283d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.156 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.157 239549 DEBUG oslo_concurrency.lockutils [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.157 239549 DEBUG nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] No waiting events found dispatching network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.157 239549 WARNING nova.compute.manager [req-4bb23387-e364-46b5-bc0d-c9bef364cb7d req-06a6b1a8-2e4d-49c7-aec3-658698602192 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Received unexpected event network-vif-plugged-4cb7a453-9db5-4fbc-a7ba-59600d76589c for instance with vm_state deleted and task_state None.
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.191 239549 DEBUG oslo_concurrency.processutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.444 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:46:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281387022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.723 239549 DEBUG oslo_concurrency.processutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.728 239549 DEBUG nova.compute.provider_tree [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.746 239549 DEBUG nova.scheduler.client.report [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.763 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.793 239549 INFO nova.scheduler.client.report [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Deleted allocations for instance 589acca5-dd9e-4695-b32a-0235932283d1
Feb 02 15:46:44 compute-0 nova_compute[239545]: 2026-02-02 15:46:44.869 239549 DEBUG oslo_concurrency.lockutils [None req-f23a0ec5-53c6-4aec-8c8e-fe7ac235441e b8e72a1cb6344869821da1cfc41bf8fc 8a28227cdc0a4390bebe7549f189bfe5 - - default default] Lock "589acca5-dd9e-4695-b32a-0235932283d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:44 compute-0 sudo[269847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:46:44 compute-0 sudo[269847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:44 compute-0 sudo[269847]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:46:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:46:45 compute-0 sudo[269872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:46:45 compute-0 sudo[269872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: pgmap v1658: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 251 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 58 KiB/s wr, 62 op/s
Feb 02 15:46:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1281387022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:46:45 compute-0 sudo[269872]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:46:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:46:45 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:46:45 compute-0 sudo[269927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:46:45 compute-0 sudo[269927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:45 compute-0 sudo[269927]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:45 compute-0 sudo[269952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:46:45 compute-0 sudo[269952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.793258211 +0000 UTC m=+0.045484568 container create 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:46:45 compute-0 systemd[1]: Started libpod-conmon-6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236.scope.
Feb 02 15:46:45 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.867639581 +0000 UTC m=+0.119865968 container init 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.775357195 +0000 UTC m=+0.027583572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.876859486 +0000 UTC m=+0.129085843 container start 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.880830553 +0000 UTC m=+0.133056940 container attach 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:46:45 compute-0 clever_meitner[270005]: 167 167
Feb 02 15:46:45 compute-0 systemd[1]: libpod-6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236.scope: Deactivated successfully.
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.882243707 +0000 UTC m=+0.134470064 container died 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e80da538228e732ad2e559f7cf0a9c3bb987ea5e02e85cea8ae4c9510cee2a-merged.mount: Deactivated successfully.
Feb 02 15:46:45 compute-0 podman[269989]: 2026-02-02 15:46:45.919067033 +0000 UTC m=+0.171293410 container remove 6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:46:45 compute-0 systemd[1]: libpod-conmon-6db82f5a895781d32fb062713cf85f20d87a781d35c5463cfcd85d9cf6904236.scope: Deactivated successfully.
Feb 02 15:46:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 251 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.051747954 +0000 UTC m=+0.040598450 container create a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:46:46 compute-0 systemd[1]: Started libpod-conmon-a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538.scope.
Feb 02 15:46:46 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.115781133 +0000 UTC m=+0.104631649 container init a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.122393133 +0000 UTC m=+0.111243629 container start a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.125555701 +0000 UTC m=+0.114406217 container attach a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.036177115 +0000 UTC m=+0.025027621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:46:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:46:46 compute-0 vigilant_mirzakhani[270045]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:46:46 compute-0 vigilant_mirzakhani[270045]: --> All data devices are unavailable
Feb 02 15:46:46 compute-0 systemd[1]: libpod-a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538.scope: Deactivated successfully.
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.574272733 +0000 UTC m=+0.563123249 container died a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e232d72341fca316c1a2c383a3b0986db8bd899a35aed26db5881d87dbd43c-merged.mount: Deactivated successfully.
Feb 02 15:46:46 compute-0 podman[270029]: 2026-02-02 15:46:46.611805127 +0000 UTC m=+0.600655623 container remove a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:46:46 compute-0 systemd[1]: libpod-conmon-a105a08ef7843a3a30fefc987db4602e37b7d92eddfad5a83e99033019ed5538.scope: Deactivated successfully.
Feb 02 15:46:46 compute-0 sudo[269952]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:46 compute-0 sudo[270078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:46:46 compute-0 sudo[270078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:46 compute-0 sudo[270078]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:46 compute-0 sudo[270103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:46:46 compute-0 sudo[270103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.009877368 +0000 UTC m=+0.031790076 container create 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:46:47 compute-0 systemd[1]: Started libpod-conmon-74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34.scope.
Feb 02 15:46:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.064895567 +0000 UTC m=+0.086808295 container init 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.070056383 +0000 UTC m=+0.091969091 container start 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:46:47 compute-0 kind_proskuriakova[270158]: 167 167
Feb 02 15:46:47 compute-0 systemd[1]: libpod-74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34.scope: Deactivated successfully.
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.07362423 +0000 UTC m=+0.095536968 container attach 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.074202283 +0000 UTC m=+0.096114991 container died 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-58035bffb9c4bc595dbdc8a114a179b627714a66bc37e92b53ac3ebe72460fce-merged.mount: Deactivated successfully.
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:46.99682365 +0000 UTC m=+0.018736378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:47 compute-0 podman[270141]: 2026-02-02 15:46:47.10528992 +0000 UTC m=+0.127202628 container remove 74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_proskuriakova, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:46:47 compute-0 systemd[1]: libpod-conmon-74d8a7237bf6557076f66f794529651e1c458a4b629704d32ac2ba6611066c34.scope: Deactivated successfully.
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.231308178 +0000 UTC m=+0.036042278 container create 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:46:47 compute-0 systemd[1]: Started libpod-conmon-183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a.scope.
Feb 02 15:46:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ab823b02c3d9f46dd4ad2d517128ed94d9525c3d8c8c640044d79001047390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ab823b02c3d9f46dd4ad2d517128ed94d9525c3d8c8c640044d79001047390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ab823b02c3d9f46dd4ad2d517128ed94d9525c3d8c8c640044d79001047390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ab823b02c3d9f46dd4ad2d517128ed94d9525c3d8c8c640044d79001047390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:46:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132934020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:46:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132934020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.302936801 +0000 UTC m=+0.107670921 container init 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.307651987 +0000 UTC m=+0.112386087 container start 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.310859995 +0000 UTC m=+0.115594115 container attach 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.21699029 +0000 UTC m=+0.021724410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:47 compute-0 nova_compute[239545]: 2026-02-02 15:46:47.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:47 compute-0 ceph-mon[75334]: pgmap v1659: 305 pgs: 305 active+clean; 251 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Feb 02 15:46:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3132934020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:46:47 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3132934020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]: {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     "0": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "devices": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "/dev/loop3"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             ],
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_name": "ceph_lv0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_size": "21470642176",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "name": "ceph_lv0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "tags": {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_name": "ceph",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.crush_device_class": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.encrypted": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.objectstore": "bluestore",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_id": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.vdo": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.with_tpm": "0"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             },
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "vg_name": "ceph_vg0"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         }
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     ],
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     "1": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "devices": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "/dev/loop4"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             ],
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_name": "ceph_lv1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_size": "21470642176",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "name": "ceph_lv1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "tags": {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_name": "ceph",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.crush_device_class": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.encrypted": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.objectstore": "bluestore",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_id": "1",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.vdo": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.with_tpm": "0"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             },
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "vg_name": "ceph_vg1"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         }
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     ],
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     "2": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "devices": [
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "/dev/loop5"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             ],
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_name": "ceph_lv2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_size": "21470642176",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "name": "ceph_lv2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "tags": {
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.cluster_name": "ceph",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.crush_device_class": "",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.encrypted": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.objectstore": "bluestore",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osd_id": "2",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.vdo": "0",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:                 "ceph.with_tpm": "0"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             },
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "type": "block",
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:             "vg_name": "ceph_vg2"
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:         }
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]:     ]
Feb 02 15:46:47 compute-0 flamboyant_kalam[270197]: }
Feb 02 15:46:47 compute-0 systemd[1]: libpod-183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a.scope: Deactivated successfully.
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.600184638 +0000 UTC m=+0.404918738 container died 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ab823b02c3d9f46dd4ad2d517128ed94d9525c3d8c8c640044d79001047390-merged.mount: Deactivated successfully.
Feb 02 15:46:47 compute-0 podman[270180]: 2026-02-02 15:46:47.637494266 +0000 UTC m=+0.442228366 container remove 183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:46:47 compute-0 systemd[1]: libpod-conmon-183029dbb9b64f9c8c2f836b6765445276b3346be01273aaa750ba60c5a01e6a.scope: Deactivated successfully.
Feb 02 15:46:47 compute-0 sudo[270103]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:47 compute-0 sudo[270216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:46:47 compute-0 sudo[270216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:47 compute-0 sudo[270216]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:47 compute-0 sudo[270241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:46:47 compute-0 sudo[270241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 251 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 179 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.055529653 +0000 UTC m=+0.059540721 container create 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:46:48 compute-0 systemd[1]: Started libpod-conmon-2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe.scope.
Feb 02 15:46:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.038056978 +0000 UTC m=+0.042068096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.196543426 +0000 UTC m=+0.200554524 container init 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:46:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.201620419 +0000 UTC m=+0.205631537 container start 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:46:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Feb 02 15:46:48 compute-0 admiring_payne[270295]: 167 167
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.206373695 +0000 UTC m=+0.210384803 container attach 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:46:48 compute-0 systemd[1]: libpod-2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe.scope: Deactivated successfully.
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.208522577 +0000 UTC m=+0.212533695 container died 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-848dbd803ec597117295130312f5e49781fc50a7a384b13797122e34971923fd-merged.mount: Deactivated successfully.
Feb 02 15:46:48 compute-0 podman[270279]: 2026-02-02 15:46:48.277877955 +0000 UTC m=+0.281889033 container remove 2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_payne, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:46:48 compute-0 systemd[1]: libpod-conmon-2babf00df2f771980c19ff215c17419aa0ef5dda148d2024d5df372946a1adbe.scope: Deactivated successfully.
Feb 02 15:46:48 compute-0 podman[270321]: 2026-02-02 15:46:48.429204889 +0000 UTC m=+0.044507014 container create bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:46:48 compute-0 systemd[1]: Started libpod-conmon-bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6.scope.
Feb 02 15:46:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39d690f0d99d05890f6cfebce2e2199c8f3e959fc9db37ed346bb9d90c3497e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39d690f0d99d05890f6cfebce2e2199c8f3e959fc9db37ed346bb9d90c3497e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39d690f0d99d05890f6cfebce2e2199c8f3e959fc9db37ed346bb9d90c3497e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39d690f0d99d05890f6cfebce2e2199c8f3e959fc9db37ed346bb9d90c3497e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:46:48 compute-0 podman[270321]: 2026-02-02 15:46:48.414200324 +0000 UTC m=+0.029502469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:46:48 compute-0 podman[270321]: 2026-02-02 15:46:48.51425768 +0000 UTC m=+0.129559845 container init bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:46:48 compute-0 podman[270321]: 2026-02-02 15:46:48.520648785 +0000 UTC m=+0.135950910 container start bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:46:48 compute-0 podman[270321]: 2026-02-02 15:46:48.524283304 +0000 UTC m=+0.139585459 container attach bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:46:48 compute-0 podman[270334]: 2026-02-02 15:46:48.539477814 +0000 UTC m=+0.073788428 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 02 15:46:48 compute-0 podman[270337]: 2026-02-02 15:46:48.553591487 +0000 UTC m=+0.084095118 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 02 15:46:49 compute-0 lvm[270458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:46:49 compute-0 lvm[270458]: VG ceph_vg0 finished
Feb 02 15:46:49 compute-0 lvm[270461]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:46:49 compute-0 lvm[270461]: VG ceph_vg1 finished
Feb 02 15:46:49 compute-0 lvm[270463]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:46:49 compute-0 lvm[270463]: VG ceph_vg2 finished
Feb 02 15:46:49 compute-0 vigorous_blackwell[270338]: {}
Feb 02 15:46:49 compute-0 ceph-mon[75334]: pgmap v1660: 305 pgs: 305 active+clean; 251 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 179 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Feb 02 15:46:49 compute-0 ceph-mon[75334]: osdmap e468: 3 total, 3 up, 3 in
Feb 02 15:46:49 compute-0 systemd[1]: libpod-bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6.scope: Deactivated successfully.
Feb 02 15:46:49 compute-0 podman[270321]: 2026-02-02 15:46:49.2984673 +0000 UTC m=+0.913769465 container died bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:46:49 compute-0 systemd[1]: libpod-bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6.scope: Consumed 1.054s CPU time.
Feb 02 15:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39d690f0d99d05890f6cfebce2e2199c8f3e959fc9db37ed346bb9d90c3497e-merged.mount: Deactivated successfully.
Feb 02 15:46:49 compute-0 podman[270321]: 2026-02-02 15:46:49.341264222 +0000 UTC m=+0.956566357 container remove bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:46:49 compute-0 systemd[1]: libpod-conmon-bb206753524fb34012da24612724edf05d67d3ff34d171a0ed10be475d5447c6.scope: Deactivated successfully.
Feb 02 15:46:49 compute-0 sudo[270241]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:46:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:46:49 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:49 compute-0 nova_compute[239545]: 2026-02-02 15:46:49.447 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:49 compute-0 sudo[270478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:46:49 compute-0 sudo[270478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:46:49 compute-0 sudo[270478]: pam_unix(sudo:session): session closed for user root
Feb 02 15:46:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 228 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Feb 02 15:46:50 compute-0 nova_compute[239545]: 2026-02-02 15:46:50.146 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047195.145204, e8ad53dc-3c67-426c-8c27-6467369ab230 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:46:50 compute-0 nova_compute[239545]: 2026-02-02 15:46:50.146 239549 INFO nova.compute.manager [-] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] VM Stopped (Lifecycle Event)
Feb 02 15:46:50 compute-0 nova_compute[239545]: 2026-02-02 15:46:50.163 239549 DEBUG nova.compute.manager [None req-a9ee7e83-55ea-404c-b773-d1c0663df617 - - - - - -] [instance: e8ad53dc-3c67-426c-8c27-6467369ab230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:46:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:46:50 compute-0 ovn_controller[144995]: 2026-02-02T15:46:50Z|00256|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:46:50 compute-0 nova_compute[239545]: 2026-02-02 15:46:50.791 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:51 compute-0 ceph-mon[75334]: pgmap v1662: 305 pgs: 305 active+clean; 228 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Feb 02 15:46:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:46:52 compute-0 nova_compute[239545]: 2026-02-02 15:46:52.352 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:53 compute-0 ceph-mon[75334]: pgmap v1663: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:46:54 compute-0 nova_compute[239545]: 2026-02-02 15:46:54.449 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007629947379176145 of space, bias 1.0, pg target 0.22889842137528435 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003798404786008763 of space, bias 1.0, pg target 0.1139521435802629 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5637177600675536e-06 of space, bias 1.0, pg target 0.0007691153280202661 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675924835334247 of space, bias 1.0, pg target 0.2002777450600274 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.456609125939044e-06 of space, bias 4.0, pg target 0.0017479309511268528 quantized to 16 (current 16)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:46:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:46:55 compute-0 ceph-mon[75334]: pgmap v1664: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:46:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 715 B/s wr, 20 op/s
Feb 02 15:46:57 compute-0 nova_compute[239545]: 2026-02-02 15:46:57.324 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047202.323171, 589acca5-dd9e-4695-b32a-0235932283d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:46:57 compute-0 nova_compute[239545]: 2026-02-02 15:46:57.324 239549 INFO nova.compute.manager [-] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] VM Stopped (Lifecycle Event)
Feb 02 15:46:57 compute-0 nova_compute[239545]: 2026-02-02 15:46:57.356 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:57 compute-0 nova_compute[239545]: 2026-02-02 15:46:57.361 239549 DEBUG nova.compute.manager [None req-420b3bf1-26c9-438e-96dd-f588b5f333f7 - - - - - -] [instance: 589acca5-dd9e-4695-b32a-0235932283d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:46:57 compute-0 ceph-mon[75334]: pgmap v1665: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 715 B/s wr, 20 op/s
Feb 02 15:46:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 20 op/s
Feb 02 15:46:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:46:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:59.257 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:46:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:59.257 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:46:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:46:59.258 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:46:59 compute-0 nova_compute[239545]: 2026-02-02 15:46:59.451 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:46:59 compute-0 ceph-mon[75334]: pgmap v1666: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 20 op/s
Feb 02 15:47:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 606 B/s wr, 16 op/s
Feb 02 15:47:00 compute-0 nova_compute[239545]: 2026-02-02 15:47:00.379 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:00.380 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:47:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:00.381 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:47:00 compute-0 ovn_controller[144995]: 2026-02-02T15:47:00Z|00257|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:47:00 compute-0 nova_compute[239545]: 2026-02-02 15:47:00.556 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:01 compute-0 ceph-mon[75334]: pgmap v1667: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 606 B/s wr, 16 op/s
Feb 02 15:47:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Feb 02 15:47:02 compute-0 nova_compute[239545]: 2026-02-02 15:47:02.359 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:03 compute-0 ceph-mon[75334]: pgmap v1668: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Feb 02 15:47:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:04 compute-0 nova_compute[239545]: 2026-02-02 15:47:04.453 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:05 compute-0 ceph-mon[75334]: pgmap v1669: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.363 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:07 compute-0 ceph-mon[75334]: pgmap v1670: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.548 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.548 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.548 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.886 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.887 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.887 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:47:07 compute-0 nova_compute[239545]: 2026-02-02 15:47:07.887 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:47:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:09 compute-0 nova_compute[239545]: 2026-02-02 15:47:09.368 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:47:09 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:09.384 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:47:09 compute-0 nova_compute[239545]: 2026-02-02 15:47:09.400 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:47:09 compute-0 nova_compute[239545]: 2026-02-02 15:47:09.401 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:47:09 compute-0 nova_compute[239545]: 2026-02-02 15:47:09.402 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:09 compute-0 nova_compute[239545]: 2026-02-02 15:47:09.458 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Feb 02 15:47:09 compute-0 ceph-mon[75334]: pgmap v1671: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Feb 02 15:47:09 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Feb 02 15:47:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:10 compute-0 ceph-mon[75334]: osdmap e469: 3 total, 3 up, 3 in
Feb 02 15:47:11 compute-0 ceph-mon[75334]: pgmap v1673: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 818 B/s wr, 11 op/s
Feb 02 15:47:12 compute-0 nova_compute[239545]: 2026-02-02 15:47:12.365 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234236261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:12 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:12 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234236261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Feb 02 15:47:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Feb 02 15:47:13 compute-0 nova_compute[239545]: 2026-02-02 15:47:13.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:13 compute-0 nova_compute[239545]: 2026-02-02 15:47:13.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:13 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Feb 02 15:47:13 compute-0 ceph-mon[75334]: pgmap v1674: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 818 B/s wr, 11 op/s
Feb 02 15:47:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3234236261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:13 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3234236261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 43 op/s
Feb 02 15:47:14 compute-0 nova_compute[239545]: 2026-02-02 15:47:14.461 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:14 compute-0 nova_compute[239545]: 2026-02-02 15:47:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:14 compute-0 nova_compute[239545]: 2026-02-02 15:47:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:14 compute-0 nova_compute[239545]: 2026-02-02 15:47:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:47:14 compute-0 ceph-mon[75334]: osdmap e470: 3 total, 3 up, 3 in
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.600 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.600 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.601 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.601 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:47:15 compute-0 nova_compute[239545]: 2026-02-02 15:47:15.601 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:47:15 compute-0 ceph-mon[75334]: pgmap v1676: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 43 op/s
Feb 02 15:47:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.4 KiB/s wr, 100 op/s
Feb 02 15:47:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:47:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1297382298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.208 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.370 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.371 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.371 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.532 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.533 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4105MB free_disk=59.942510484717786GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.533 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.533 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.648 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.648 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.648 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.663 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.679 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.679 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.691 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.716 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:47:16 compute-0 nova_compute[239545]: 2026-02-02 15:47:16.766 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:47:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1297382298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:47:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:47:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075001175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.339 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.344 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.366 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.372 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.531 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:47:17 compute-0 nova_compute[239545]: 2026-02-02 15:47:17.531 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:47:17 compute-0 ceph-mon[75334]: pgmap v1677: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.4 KiB/s wr, 100 op/s
Feb 02 15:47:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1075001175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:47:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.0 KiB/s wr, 94 op/s
Feb 02 15:47:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:18 compute-0 nova_compute[239545]: 2026-02-02 15:47:18.531 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:18 compute-0 nova_compute[239545]: 2026-02-02 15:47:18.532 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:47:19 compute-0 podman[270550]: 2026-02-02 15:47:19.335360036 +0000 UTC m=+0.067187685 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb 02 15:47:19 compute-0 podman[270549]: 2026-02-02 15:47:19.400212656 +0000 UTC m=+0.132281372 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:47:19 compute-0 nova_compute[239545]: 2026-02-02 15:47:19.463 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:19 compute-0 ceph-mon[75334]: pgmap v1678: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.0 KiB/s wr, 94 op/s
Feb 02 15:47:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.8 KiB/s wr, 85 op/s
Feb 02 15:47:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Feb 02 15:47:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Feb 02 15:47:21 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Feb 02 15:47:21 compute-0 ceph-mon[75334]: pgmap v1679: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.8 KiB/s wr, 85 op/s
Feb 02 15:47:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Feb 02 15:47:22 compute-0 nova_compute[239545]: 2026-02-02 15:47:22.368 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:22 compute-0 ceph-mon[75334]: osdmap e471: 3 total, 3 up, 3 in
Feb 02 15:47:22 compute-0 ceph-mon[75334]: pgmap v1681: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Feb 02 15:47:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e471 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Feb 02 15:47:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Feb 02 15:47:23 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Feb 02 15:47:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Feb 02 15:47:24 compute-0 ceph-mon[75334]: osdmap e472: 3 total, 3 up, 3 in
Feb 02 15:47:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1539970344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1539970344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:24 compute-0 nova_compute[239545]: 2026-02-02 15:47:24.465 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:25 compute-0 ceph-mon[75334]: pgmap v1683: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Feb 02 15:47:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1539970344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1539970344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249745357' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249745357' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:47:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 112K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.70 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 32.94 MB, 0.05 MB/s
                                           Interval WAL: 12K writes, 5355 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:47:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 45 op/s
Feb 02 15:47:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4249745357' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4249745357' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2587072109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2587072109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:27 compute-0 ceph-mon[75334]: pgmap v1684: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 45 op/s
Feb 02 15:47:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2587072109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2587072109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:27 compute-0 nova_compute[239545]: 2026-02-02 15:47:27.373 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.9 KiB/s wr, 39 op/s
Feb 02 15:47:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603988299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603988299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3115339729' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3115339729' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3603988299' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3603988299' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3115339729' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3115339729' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964679086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964679086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:29 compute-0 ceph-mon[75334]: pgmap v1685: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.9 KiB/s wr, 39 op/s
Feb 02 15:47:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:47:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 114K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 11K syncs, 2.71 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per commit group, ingest: 25.65 MB, 0.04 MB/s
                                           Interval WAL: 11K writes, 5006 syncs, 2.31 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:47:29 compute-0 nova_compute[239545]: 2026-02-02 15:47:29.467 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.8 KiB/s wr, 41 op/s
Feb 02 15:47:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Feb 02 15:47:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Feb 02 15:47:30 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Feb 02 15:47:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2964679086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2964679086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:31 compute-0 ceph-mon[75334]: pgmap v1686: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.8 KiB/s wr, 41 op/s
Feb 02 15:47:31 compute-0 ceph-mon[75334]: osdmap e473: 3 total, 3 up, 3 in
Feb 02 15:47:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3835802836' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3835802836' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Feb 02 15:47:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3835802836' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3835802836' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:32 compute-0 nova_compute[239545]: 2026-02-02 15:47:32.376 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:32 compute-0 ovn_controller[144995]: 2026-02-02T15:47:32Z|00258|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Feb 02 15:47:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:47:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 86K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 7584 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8613 writes, 28K keys, 8613 commit groups, 1.0 writes per commit group, ingest: 26.31 MB, 0.04 MB/s
                                           Interval WAL: 8613 writes, 3721 syncs, 2.31 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:47:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:33 compute-0 ceph-mon[75334]: pgmap v1688: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Feb 02 15:47:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 96 op/s
Feb 02 15:47:34 compute-0 nova_compute[239545]: 2026-02-02 15:47:34.470 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:35 compute-0 ceph-mon[75334]: pgmap v1689: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 96 op/s
Feb 02 15:47:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:47:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Feb 02 15:47:37 compute-0 ceph-mon[75334]: pgmap v1690: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Feb 02 15:47:37 compute-0 nova_compute[239545]: 2026-02-02 15:47:37.377 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Feb 02 15:47:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Feb 02 15:47:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Feb 02 15:47:38 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Feb 02 15:47:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:47:39 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/543423915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:47:39 compute-0 ceph-mon[75334]: pgmap v1691: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.1 KiB/s wr, 91 op/s
Feb 02 15:47:39 compute-0 ceph-mon[75334]: osdmap e474: 3 total, 3 up, 3 in
Feb 02 15:47:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/543423915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:47:39 compute-0 nova_compute[239545]: 2026-02-02 15:47:39.472 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.0 KiB/s wr, 57 op/s
Feb 02 15:47:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Feb 02 15:47:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Feb 02 15:47:40 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Feb 02 15:47:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Feb 02 15:47:41 compute-0 ceph-mon[75334]: pgmap v1693: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.0 KiB/s wr, 57 op/s
Feb 02 15:47:41 compute-0 ceph-mon[75334]: osdmap e475: 3 total, 3 up, 3 in
Feb 02 15:47:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Feb 02 15:47:41 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Feb 02 15:47:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Feb 02 15:47:42 compute-0 ceph-mon[75334]: osdmap e476: 3 total, 3 up, 3 in
Feb 02 15:47:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Feb 02 15:47:42 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Feb 02 15:47:42 compute-0 nova_compute[239545]: 2026-02-02 15:47:42.381 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:47:42
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'backups', '.mgr', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root']
Feb 02 15:47:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:47:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:43 compute-0 ceph-mon[75334]: pgmap v1696: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Feb 02 15:47:43 compute-0 ceph-mon[75334]: osdmap e477: 3 total, 3 up, 3 in
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 56 op/s
Feb 02 15:47:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Feb 02 15:47:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Feb 02 15:47:44 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Feb 02 15:47:44 compute-0 nova_compute[239545]: 2026-02-02 15:47:44.474 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:47:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:47:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202493797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:44 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:47:44 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202493797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:47:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:47:45 compute-0 ceph-mon[75334]: pgmap v1698: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 56 op/s
Feb 02 15:47:45 compute-0 ceph-mon[75334]: osdmap e478: 3 total, 3 up, 3 in
Feb 02 15:47:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4202493797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:47:45 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/4202493797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:47:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Feb 02 15:47:47 compute-0 nova_compute[239545]: 2026-02-02 15:47:47.383 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:47 compute-0 ceph-mon[75334]: pgmap v1700: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Feb 02 15:47:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Feb 02 15:47:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Feb 02 15:47:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Feb 02 15:47:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Feb 02 15:47:49 compute-0 ceph-mon[75334]: pgmap v1701: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Feb 02 15:47:49 compute-0 ceph-mon[75334]: osdmap e479: 3 total, 3 up, 3 in
Feb 02 15:47:49 compute-0 nova_compute[239545]: 2026-02-02 15:47:49.477 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:49 compute-0 sudo[270589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:47:49 compute-0 sudo[270589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:49 compute-0 sudo[270589]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:49 compute-0 sudo[270626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:47:49 compute-0 sudo[270626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:49 compute-0 podman[270614]: 2026-02-02 15:47:49.660916902 +0000 UTC m=+0.074893364 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:47:49 compute-0 podman[270613]: 2026-02-02 15:47:49.690080132 +0000 UTC m=+0.102935646 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 02 15:47:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.0 KiB/s wr, 71 op/s
Feb 02 15:47:50 compute-0 sudo[270626]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:47:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:47:50 compute-0 sudo[270711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:47:50 compute-0 sudo[270711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:50 compute-0 sudo[270711]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:47:50 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:47:50 compute-0 sudo[270736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:47:50 compute-0 sudo[270736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.5057892 +0000 UTC m=+0.051597908 container create 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:47:50 compute-0 systemd[1]: Started libpod-conmon-74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2.scope.
Feb 02 15:47:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.472868588 +0000 UTC m=+0.018677316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.57397929 +0000 UTC m=+0.119788018 container init 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.582671781 +0000 UTC m=+0.128480489 container start 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:47:50 compute-0 nice_galois[270792]: 167 167
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.58757509 +0000 UTC m=+0.133383828 container attach 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.588338249 +0000 UTC m=+0.134146957 container died 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:47:50 compute-0 systemd[1]: libpod-74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2.scope: Deactivated successfully.
Feb 02 15:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-81fd6d46f27071ce7332c717dc9516e310a1c905d02f45c11e21ac87c1fd6eec-merged.mount: Deactivated successfully.
Feb 02 15:47:50 compute-0 podman[270775]: 2026-02-02 15:47:50.641584615 +0000 UTC m=+0.187393363 container remove 74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 15:47:50 compute-0 systemd[1]: libpod-conmon-74de1340e1e93868d7d74d1ea6fe7040f7f2bcdd8cc69fa8aa9e4627fd00c0d2.scope: Deactivated successfully.
Feb 02 15:47:50 compute-0 podman[270816]: 2026-02-02 15:47:50.764339824 +0000 UTC m=+0.037580506 container create 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:47:50 compute-0 systemd[1]: Started libpod-conmon-95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b.scope.
Feb 02 15:47:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:50 compute-0 podman[270816]: 2026-02-02 15:47:50.748782164 +0000 UTC m=+0.022022866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:50 compute-0 podman[270816]: 2026-02-02 15:47:50.863404915 +0000 UTC m=+0.136645627 container init 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:47:50 compute-0 podman[270816]: 2026-02-02 15:47:50.873257015 +0000 UTC m=+0.146497697 container start 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 15:47:50 compute-0 podman[270816]: 2026-02-02 15:47:50.877018356 +0000 UTC m=+0.150259038 container attach 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:47:51 compute-0 ceph-mon[75334]: pgmap v1703: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.0 KiB/s wr, 71 op/s
Feb 02 15:47:51 compute-0 eager_keller[270832]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:47:51 compute-0 eager_keller[270832]: --> All data devices are unavailable
Feb 02 15:47:51 compute-0 systemd[1]: libpod-95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b.scope: Deactivated successfully.
Feb 02 15:47:51 compute-0 podman[270816]: 2026-02-02 15:47:51.301381527 +0000 UTC m=+0.574622209 container died 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b803f00b6fc0255dd10dc1a34aa029b3a998c3b83903202972f476fca29cd21-merged.mount: Deactivated successfully.
Feb 02 15:47:51 compute-0 podman[270816]: 2026-02-02 15:47:51.345991263 +0000 UTC m=+0.619231945 container remove 95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:47:51 compute-0 systemd[1]: libpod-conmon-95086520ebaa8e4a61bc228bde5765e48f90ca31baaacbb31430bee9f47ed36b.scope: Deactivated successfully.
Feb 02 15:47:51 compute-0 sudo[270736]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:51 compute-0 sudo[270866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:47:51 compute-0 sudo[270866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:51 compute-0 sudo[270866]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:51 compute-0 sudo[270892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:47:51 compute-0 sudo[270892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.789593302 +0000 UTC m=+0.043175462 container create 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:47:51 compute-0 systemd[1]: Started libpod-conmon-94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123.scope.
Feb 02 15:47:51 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.860300573 +0000 UTC m=+0.113882813 container init 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.766764016 +0000 UTC m=+0.020346206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.867777115 +0000 UTC m=+0.121359305 container start 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.8717018 +0000 UTC m=+0.125284000 container attach 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:47:51 compute-0 modest_blackburn[270946]: 167 167
Feb 02 15:47:51 compute-0 systemd[1]: libpod-94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123.scope: Deactivated successfully.
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.876120029 +0000 UTC m=+0.129702189 container died 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e67bdd5d798c6516a824708db4df6df1765c8e4d0e5d8cc1aab93918c3d246e0-merged.mount: Deactivated successfully.
Feb 02 15:47:51 compute-0 podman[270929]: 2026-02-02 15:47:51.919584467 +0000 UTC m=+0.173166627 container remove 94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:47:51 compute-0 systemd[1]: libpod-conmon-94e931919ab59700bec6d1aba898b79014c7bbb54a2911e5f0662dd84e53c123.scope: Deactivated successfully.
Feb 02 15:47:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.070678325 +0000 UTC m=+0.044458894 container create af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:47:52 compute-0 systemd[1]: Started libpod-conmon-af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba.scope.
Feb 02 15:47:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b6fd63a2e449757ab6eb937d9941c4443bec84b3046ac01c1156fd6137f643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b6fd63a2e449757ab6eb937d9941c4443bec84b3046ac01c1156fd6137f643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b6fd63a2e449757ab6eb937d9941c4443bec84b3046ac01c1156fd6137f643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b6fd63a2e449757ab6eb937d9941c4443bec84b3046ac01c1156fd6137f643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.054353347 +0000 UTC m=+0.028133936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.160901721 +0000 UTC m=+0.134682320 container init af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.166859136 +0000 UTC m=+0.140639715 container start af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.171402046 +0000 UTC m=+0.145182615 container attach af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:47:52 compute-0 nova_compute[239545]: 2026-02-02 15:47:52.386 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]: {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     "0": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "devices": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "/dev/loop3"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             ],
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_name": "ceph_lv0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_size": "21470642176",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "name": "ceph_lv0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "tags": {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_name": "ceph",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.crush_device_class": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.encrypted": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.objectstore": "bluestore",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_id": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.vdo": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.with_tpm": "0"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             },
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "vg_name": "ceph_vg0"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         }
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     ],
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     "1": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "devices": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "/dev/loop4"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             ],
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_name": "ceph_lv1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_size": "21470642176",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "name": "ceph_lv1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "tags": {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_name": "ceph",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.crush_device_class": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.encrypted": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.objectstore": "bluestore",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_id": "1",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.vdo": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.with_tpm": "0"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             },
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "vg_name": "ceph_vg1"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         }
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     ],
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     "2": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "devices": [
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "/dev/loop5"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             ],
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_name": "ceph_lv2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_size": "21470642176",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "name": "ceph_lv2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "tags": {
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.cluster_name": "ceph",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.crush_device_class": "",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.encrypted": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.objectstore": "bluestore",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osd_id": "2",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.vdo": "0",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:                 "ceph.with_tpm": "0"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             },
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "type": "block",
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:             "vg_name": "ceph_vg2"
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:         }
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]:     ]
Feb 02 15:47:52 compute-0 tender_bhaskara[270986]: }
Feb 02 15:47:52 compute-0 systemd[1]: libpod-af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba.scope: Deactivated successfully.
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.477634731 +0000 UTC m=+0.451415320 container died af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b6fd63a2e449757ab6eb937d9941c4443bec84b3046ac01c1156fd6137f643-merged.mount: Deactivated successfully.
Feb 02 15:47:52 compute-0 podman[270970]: 2026-02-02 15:47:52.523784264 +0000 UTC m=+0.497564853 container remove af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:47:52 compute-0 systemd[1]: libpod-conmon-af484822b3796a6543a1d9e4ca8fe4188900790dfd194a97f13d877e4b2588ba.scope: Deactivated successfully.
Feb 02 15:47:52 compute-0 sudo[270892]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:52 compute-0 sudo[271006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:47:52 compute-0 sudo[271006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:52 compute-0 sudo[271006]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:52 compute-0 sudo[271031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:47:52 compute-0 sudo[271031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:52 compute-0 podman[271068]: 2026-02-02 15:47:52.959589973 +0000 UTC m=+0.041712095 container create 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:47:53 compute-0 systemd[1]: Started libpod-conmon-5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8.scope.
Feb 02 15:47:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:52.942115048 +0000 UTC m=+0.024237190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:53.045189878 +0000 UTC m=+0.127312020 container init 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:53.050608429 +0000 UTC m=+0.132730551 container start 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:47:53 compute-0 silly_nightingale[271085]: 167 167
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:53.054527765 +0000 UTC m=+0.136649917 container attach 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:47:53 compute-0 systemd[1]: libpod-5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8.scope: Deactivated successfully.
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:53.055746555 +0000 UTC m=+0.137868667 container died 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-934c7c8f9389ca9790366c7af0445f53c98624ac6f6eda25a0e73e9feb395a72-merged.mount: Deactivated successfully.
Feb 02 15:47:53 compute-0 podman[271068]: 2026-02-02 15:47:53.101857317 +0000 UTC m=+0.183979459 container remove 5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:47:53 compute-0 systemd[1]: libpod-conmon-5f418c74e849055ee57b2c76403a574650a088c66a3537b957883a3c9dc936f8.scope: Deactivated successfully.
Feb 02 15:47:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Feb 02 15:47:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Feb 02 15:47:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Feb 02 15:47:53 compute-0 podman[271111]: 2026-02-02 15:47:53.234304812 +0000 UTC m=+0.043448769 container create a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:47:53 compute-0 ceph-mon[75334]: pgmap v1704: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Feb 02 15:47:53 compute-0 ceph-mon[75334]: osdmap e480: 3 total, 3 up, 3 in
Feb 02 15:47:53 compute-0 systemd[1]: Started libpod-conmon-a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600.scope.
Feb 02 15:47:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cb2cbb62fbdfcac2ec92006c909b9b49c98c2a8b800c179243ace7fb546386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cb2cbb62fbdfcac2ec92006c909b9b49c98c2a8b800c179243ace7fb546386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cb2cbb62fbdfcac2ec92006c909b9b49c98c2a8b800c179243ace7fb546386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cb2cbb62fbdfcac2ec92006c909b9b49c98c2a8b800c179243ace7fb546386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:47:53 compute-0 podman[271111]: 2026-02-02 15:47:53.215035882 +0000 UTC m=+0.024179919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:47:53 compute-0 podman[271111]: 2026-02-02 15:47:53.317612229 +0000 UTC m=+0.126756216 container init a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:47:53 compute-0 podman[271111]: 2026-02-02 15:47:53.323726668 +0000 UTC m=+0.132870625 container start a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:47:53 compute-0 podman[271111]: 2026-02-02 15:47:53.327809747 +0000 UTC m=+0.136953704 container attach a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:47:53 compute-0 lvm[271204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:47:53 compute-0 lvm[271204]: VG ceph_vg0 finished
Feb 02 15:47:53 compute-0 lvm[271207]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:47:53 compute-0 lvm[271207]: VG ceph_vg1 finished
Feb 02 15:47:53 compute-0 lvm[271209]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:47:53 compute-0 lvm[271209]: VG ceph_vg2 finished
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:47:54 compute-0 blissful_kilby[271128]: {}
Feb 02 15:47:54 compute-0 systemd[1]: libpod-a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600.scope: Deactivated successfully.
Feb 02 15:47:54 compute-0 podman[271111]: 2026-02-02 15:47:54.109882806 +0000 UTC m=+0.919026803 container died a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:47:54 compute-0 systemd[1]: libpod-a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600.scope: Consumed 1.106s CPU time.
Feb 02 15:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8cb2cbb62fbdfcac2ec92006c909b9b49c98c2a8b800c179243ace7fb546386-merged.mount: Deactivated successfully.
Feb 02 15:47:54 compute-0 podman[271111]: 2026-02-02 15:47:54.163823769 +0000 UTC m=+0.972967726 container remove a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:47:54 compute-0 systemd[1]: libpod-conmon-a50c51d75f8ffea947e93f3efe4f1f10b6324716b39a9831ffd8f807861f9600.scope: Deactivated successfully.
Feb 02 15:47:54 compute-0 sudo[271031]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:47:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:47:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:54 compute-0 sudo[271223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:47:54 compute-0 sudo[271223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:47:54 compute-0 sudo[271223]: pam_unix(sudo:session): session closed for user root
Feb 02 15:47:54 compute-0 nova_compute[239545]: 2026-02-02 15:47:54.479 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000762988745238606 of space, bias 1.0, pg target 0.2288966235715818 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003843396609048557 of space, bias 1.0, pg target 0.11530189827145672 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.2283462273032045e-06 of space, bias 1.0, pg target 0.0009685038681909613 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677660383485433 of space, bias 1.0, pg target 0.20032981150456297 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4207772524893854e-06 of space, bias 4.0, pg target 0.0017049327029872625 quantized to 16 (current 16)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:47:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:47:55 compute-0 ceph-mon[75334]: pgmap v1706: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:47:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:47:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:47:57 compute-0 ceph-mon[75334]: pgmap v1707: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 4 op/s
Feb 02 15:47:57 compute-0 nova_compute[239545]: 2026-02-02 15:47:57.390 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:47:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:47:59 compute-0 ceph-mon[75334]: pgmap v1708: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:47:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:59.258 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:47:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:59.258 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:47:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:47:59.259 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:47:59 compute-0 nova_compute[239545]: 2026-02-02 15:47:59.481 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:01 compute-0 ceph-mon[75334]: pgmap v1709: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:48:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688010032' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:48:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688010032' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 204 B/s wr, 0 op/s
Feb 02 15:48:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3688010032' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:02 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3688010032' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:02 compute-0 nova_compute[239545]: 2026-02-02 15:48:02.394 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:03 compute-0 ceph-mon[75334]: pgmap v1710: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 204 B/s wr, 0 op/s
Feb 02 15:48:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:48:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1779637663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:48:03 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1779637663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 567 B/s wr, 12 op/s
Feb 02 15:48:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1779637663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:04 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1779637663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:04 compute-0 nova_compute[239545]: 2026-02-02 15:48:04.483 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:05 compute-0 ceph-mon[75334]: pgmap v1711: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 567 B/s wr, 12 op/s
Feb 02 15:48:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Feb 02 15:48:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:06.672 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:48:06 compute-0 nova_compute[239545]: 2026-02-02 15:48:06.672 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:06 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:06.673 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:48:07 compute-0 ceph-mon[75334]: pgmap v1712: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Feb 02 15:48:07 compute-0 nova_compute[239545]: 2026-02-02 15:48:07.396 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:07 compute-0 nova_compute[239545]: 2026-02-02 15:48:07.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:07 compute-0 nova_compute[239545]: 2026-02-02 15:48:07.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:48:07 compute-0 nova_compute[239545]: 2026-02-02 15:48:07.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:48:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Feb 02 15:48:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:08 compute-0 nova_compute[239545]: 2026-02-02 15:48:08.317 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:48:08 compute-0 nova_compute[239545]: 2026-02-02 15:48:08.318 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:48:08 compute-0 nova_compute[239545]: 2026-02-02 15:48:08.318 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:48:08 compute-0 nova_compute[239545]: 2026-02-02 15:48:08.318 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:48:09 compute-0 ceph-mon[75334]: pgmap v1713: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.347635) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289347677, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2159, "num_deletes": 255, "total_data_size": 3331578, "memory_usage": 3386320, "flush_reason": "Manual Compaction"}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289359968, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3271897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32095, "largest_seqno": 34253, "table_properties": {"data_size": 3262059, "index_size": 6204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20708, "raw_average_key_size": 20, "raw_value_size": 3242223, "raw_average_value_size": 3235, "num_data_blocks": 272, "num_entries": 1002, "num_filter_entries": 1002, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047092, "oldest_key_time": 1770047092, "file_creation_time": 1770047289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12380 microseconds, and 5343 cpu microseconds.
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.360019) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3271897 bytes OK
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.360040) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.362370) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.362388) EVENT_LOG_v1 {"time_micros": 1770047289362383, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.362408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3322450, prev total WAL file size 3322450, number of live WAL files 2.
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.362997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3195KB)], [65(10MB)]
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289363045, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 14137190, "oldest_snapshot_seqno": -1}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6800 keys, 12301531 bytes, temperature: kUnknown
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289444145, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12301531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12247718, "index_size": 35717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 170185, "raw_average_key_size": 25, "raw_value_size": 12117396, "raw_average_value_size": 1781, "num_data_blocks": 1433, "num_entries": 6800, "num_filter_entries": 6800, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.444521) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12301531 bytes
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.445843) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.1 rd, 151.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.4 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 7324, records dropped: 524 output_compression: NoCompression
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.445869) EVENT_LOG_v1 {"time_micros": 1770047289445855, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81216, "compaction_time_cpu_micros": 26524, "output_level": 6, "num_output_files": 1, "total_output_size": 12301531, "num_input_records": 7324, "num_output_records": 6800, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289446528, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047289448412, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.362923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.448488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.448498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.448501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.448505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:48:09.448508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:48:09 compute-0 nova_compute[239545]: 2026-02-02 15:48:09.485 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:09 compute-0 nova_compute[239545]: 2026-02-02 15:48:09.824 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:48:09 compute-0 nova_compute[239545]: 2026-02-02 15:48:09.843 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:48:09 compute-0 nova_compute[239545]: 2026-02-02 15:48:09.843 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:48:09 compute-0 nova_compute[239545]: 2026-02-02 15:48:09.844 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 15:48:11 compute-0 ceph-mon[75334]: pgmap v1714: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 15:48:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 15:48:12 compute-0 nova_compute[239545]: 2026-02-02 15:48:12.399 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:13 compute-0 ceph-mon[75334]: pgmap v1715: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 15:48:13 compute-0 nova_compute[239545]: 2026-02-02 15:48:13.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:13 compute-0 nova_compute[239545]: 2026-02-02 15:48:13.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:13 compute-0 nova_compute[239545]: 2026-02-02 15:48:13.569 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 28 op/s
Feb 02 15:48:14 compute-0 nova_compute[239545]: 2026-02-02 15:48:14.487 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:14 compute-0 nova_compute[239545]: 2026-02-02 15:48:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:15 compute-0 ceph-mon[75334]: pgmap v1716: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 28 op/s
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.571 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.571 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.571 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.572 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:48:15 compute-0 nova_compute[239545]: 2026-02-02 15:48:15.572 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:48:15 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:15.674 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:48:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:48:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524962161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:48:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.065 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.234 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.235 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.235 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.367 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.368 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4112MB free_disk=59.9425110751763GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.368 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.368 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.443 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.443 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.443 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:48:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1524962161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:48:16 compute-0 nova_compute[239545]: 2026-02-02 15:48:16.513 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:48:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:48:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790510813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.004 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.008 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.024 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.026 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.027 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:48:17 compute-0 nova_compute[239545]: 2026-02-02 15:48:17.402 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:17 compute-0 ceph-mon[75334]: pgmap v1717: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Feb 02 15:48:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3790510813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:48:18 compute-0 nova_compute[239545]: 2026-02-02 15:48:18.027 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:18 compute-0 nova_compute[239545]: 2026-02-02 15:48:18.028 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:18 compute-0 nova_compute[239545]: 2026-02-02 15:48:18.028 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:48:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:48:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:18 compute-0 nova_compute[239545]: 2026-02-02 15:48:18.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:48:19 compute-0 ceph-mon[75334]: pgmap v1718: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:48:19 compute-0 nova_compute[239545]: 2026-02-02 15:48:19.488 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:48:20 compute-0 podman[271293]: 2026-02-02 15:48:20.308519696 +0000 UTC m=+0.048300366 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 02 15:48:20 compute-0 podman[271292]: 2026-02-02 15:48:20.324414833 +0000 UTC m=+0.064262745 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:48:21 compute-0 ceph-mon[75334]: pgmap v1719: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:48:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:22 compute-0 nova_compute[239545]: 2026-02-02 15:48:22.405 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:23 compute-0 ceph-mon[75334]: pgmap v1720: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 02 15:48:24 compute-0 nova_compute[239545]: 2026-02-02 15:48:24.490 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:25 compute-0 ceph-mon[75334]: pgmap v1721: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 02 15:48:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:48:27 compute-0 nova_compute[239545]: 2026-02-02 15:48:27.409 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:27 compute-0 ceph-mon[75334]: pgmap v1722: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:48:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:48:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:48:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962523639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:48:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962523639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1962523639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1962523639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:29 compute-0 nova_compute[239545]: 2026-02-02 15:48:29.226 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:29 compute-0 nova_compute[239545]: 2026-02-02 15:48:29.492 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:29 compute-0 ceph-mon[75334]: pgmap v1723: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:48:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:31 compute-0 ceph-mon[75334]: pgmap v1724: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:32 compute-0 nova_compute[239545]: 2026-02-02 15:48:32.343 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:32 compute-0 nova_compute[239545]: 2026-02-02 15:48:32.411 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:33 compute-0 ceph-mon[75334]: pgmap v1725: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:34 compute-0 nova_compute[239545]: 2026-02-02 15:48:34.493 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:35 compute-0 ceph-mon[75334]: pgmap v1726: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:48:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 02 15:48:37 compute-0 nova_compute[239545]: 2026-02-02 15:48:37.414 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:37 compute-0 ceph-mon[75334]: pgmap v1727: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 02 15:48:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:48:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:38 compute-0 nova_compute[239545]: 2026-02-02 15:48:38.357 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:39 compute-0 nova_compute[239545]: 2026-02-02 15:48:39.495 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:39 compute-0 ceph-mon[75334]: pgmap v1728: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:48:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:48:41 compute-0 ceph-mon[75334]: pgmap v1729: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:42 compute-0 nova_compute[239545]: 2026-02-02 15:48:42.416 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:48:42
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Feb 02 15:48:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:48:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:43 compute-0 ceph-mon[75334]: pgmap v1730: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:44 compute-0 nova_compute[239545]: 2026-02-02 15:48:44.498 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:44 compute-0 nova_compute[239545]: 2026-02-02 15:48:44.673 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:48:44 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:48:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:48:45 compute-0 ceph-mon[75334]: pgmap v1731: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:47 compute-0 nova_compute[239545]: 2026-02-02 15:48:47.419 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:47 compute-0 ceph-mon[75334]: pgmap v1732: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:49 compute-0 nova_compute[239545]: 2026-02-02 15:48:49.501 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Feb 02 15:48:49 compute-0 ceph-mon[75334]: pgmap v1733: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:48:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Feb 02 15:48:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Feb 02 15:48:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Feb 02 15:48:50 compute-0 ceph-mon[75334]: osdmap e481: 3 total, 3 up, 3 in
Feb 02 15:48:51 compute-0 podman[271338]: 2026-02-02 15:48:51.301416369 +0000 UTC m=+0.043217123 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:48:51 compute-0 podman[271337]: 2026-02-02 15:48:51.330048876 +0000 UTC m=+0.073765696 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 02 15:48:51 compute-0 ceph-mon[75334]: pgmap v1735: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Feb 02 15:48:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Feb 02 15:48:52 compute-0 nova_compute[239545]: 2026-02-02 15:48:52.422 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Feb 02 15:48:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Feb 02 15:48:53 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Feb 02 15:48:53 compute-0 ceph-mon[75334]: pgmap v1736: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Feb 02 15:48:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:48:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212854102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:48:53 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212854102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.5 KiB/s wr, 43 op/s
Feb 02 15:48:54 compute-0 sudo[271383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:48:54 compute-0 sudo[271383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:54 compute-0 sudo[271383]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:54 compute-0 sudo[271408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:48:54 compute-0 sudo[271408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:54 compute-0 nova_compute[239545]: 2026-02-02 15:48:54.504 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007629827525595975 of space, bias 1.0, pg target 0.22889482576787926 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003838691890274647 of space, bias 1.0, pg target 0.11516075670823941 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.905922607215205e-06 of space, bias 1.0, pg target 0.0011717767821645617 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677631817347154 of space, bias 1.0, pg target 0.20032895452041463 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4229818131608983e-06 of space, bias 4.0, pg target 0.001707578175793078 quantized to 16 (current 16)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:48:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:48:54 compute-0 sudo[271408]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:48:54 compute-0 ceph-mon[75334]: osdmap e482: 3 total, 3 up, 3 in
Feb 02 15:48:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2212854102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:54 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2212854102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:48:54 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:54 compute-0 sudo[271453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:48:54 compute-0 sudo[271453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:54 compute-0 sudo[271453]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:54 compute-0 sudo[271478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:48:54 compute-0 sudo[271478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:48:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/720080481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:48:54 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/720080481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:55 compute-0 sudo[271478]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:48:55 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:48:55 compute-0 sudo[271534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:48:55 compute-0 sudo[271534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:55 compute-0 sudo[271534]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:55 compute-0 sudo[271559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:48:55 compute-0 sudo[271559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.620781829 +0000 UTC m=+0.042678441 container create f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:48:55 compute-0 systemd[1]: Started libpod-conmon-f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153.scope.
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.598213469 +0000 UTC m=+0.020110111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:55 compute-0 ceph-mon[75334]: pgmap v1738: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.5 KiB/s wr, 43 op/s
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/720080481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/720080481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:48:55 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.731592786 +0000 UTC m=+0.153489428 container init f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.738024833 +0000 UTC m=+0.159921485 container start f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:48:55 compute-0 competent_grothendieck[271612]: 167 167
Feb 02 15:48:55 compute-0 systemd[1]: libpod-f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153.scope: Deactivated successfully.
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.77490354 +0000 UTC m=+0.196800162 container attach f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.775370711 +0000 UTC m=+0.197267353 container died f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 15:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-35cb556d8a254d37056eb9c0c1c6d673659191f67747eecddd1c4c9540298ac4-merged.mount: Deactivated successfully.
Feb 02 15:48:55 compute-0 podman[271596]: 2026-02-02 15:48:55.982912044 +0000 UTC m=+0.404808656 container remove f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 02 15:48:55 compute-0 systemd[1]: libpod-conmon-f305962d202f9b5adf307060049f78b1b1a528b07cb14a8ed099d72a5400a153.scope: Deactivated successfully.
Feb 02 15:48:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 65 op/s
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.174882607 +0000 UTC m=+0.067930175 container create 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.132360972 +0000 UTC m=+0.025408570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:56 compute-0 systemd[1]: Started libpod-conmon-75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617.scope.
Feb 02 15:48:56 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.275672981 +0000 UTC m=+0.168720559 container init 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.281272687 +0000 UTC m=+0.174320255 container start 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.286120695 +0000 UTC m=+0.179168283 container attach 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:48:56 compute-0 amazing_davinci[271652]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:48:56 compute-0 amazing_davinci[271652]: --> All data devices are unavailable
Feb 02 15:48:56 compute-0 systemd[1]: libpod-75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617.scope: Deactivated successfully.
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.734182392 +0000 UTC m=+0.627230020 container died 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-485c9a5cbf590633ae751a4d4d1ba237c6e756b224ac6443fb9137717a84f60a-merged.mount: Deactivated successfully.
Feb 02 15:48:56 compute-0 podman[271636]: 2026-02-02 15:48:56.780544811 +0000 UTC m=+0.673592379 container remove 75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 02 15:48:56 compute-0 systemd[1]: libpod-conmon-75928ebd1d0a4379ed307938f58bfc4459bfbdf29fba3a94e55aeab7a721a617.scope: Deactivated successfully.
Feb 02 15:48:56 compute-0 sudo[271559]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:56 compute-0 sudo[271685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:48:56 compute-0 sudo[271685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:56 compute-0 sudo[271685]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:56 compute-0 sudo[271710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:48:56 compute-0 sudo[271710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.165538273 +0000 UTC m=+0.039575804 container create fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:48:57 compute-0 systemd[1]: Started libpod-conmon-fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269.scope.
Feb 02 15:48:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.237343891 +0000 UTC m=+0.111381512 container init fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.244011463 +0000 UTC m=+0.118048994 container start fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.149985555 +0000 UTC m=+0.024023106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.247434927 +0000 UTC m=+0.121472478 container attach fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:48:57 compute-0 beautiful_feynman[271762]: 167 167
Feb 02 15:48:57 compute-0 systemd[1]: libpod-fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269.scope: Deactivated successfully.
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.250459331 +0000 UTC m=+0.124496862 container died fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c41c55841d12f92c7805ff743dc67bd0632e0376459b3c574c703b091a00d5be-merged.mount: Deactivated successfully.
Feb 02 15:48:57 compute-0 podman[271746]: 2026-02-02 15:48:57.292891033 +0000 UTC m=+0.166928584 container remove fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_feynman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:48:57 compute-0 systemd[1]: libpod-conmon-fa0a02ae2c7e8c9c5173fc150c4d8cec427f2991ffb0c468221e6753021ab269.scope: Deactivated successfully.
Feb 02 15:48:57 compute-0 nova_compute[239545]: 2026-02-02 15:48:57.425 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.452084959 +0000 UTC m=+0.052052268 container create 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:48:57 compute-0 systemd[1]: Started libpod-conmon-5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a.scope.
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.428918475 +0000 UTC m=+0.028885804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:57 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a1cebccbfcdd3b63b54b6387e87c0b83930e5bbe98a7f5a4bd834d5f7e16e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a1cebccbfcdd3b63b54b6387e87c0b83930e5bbe98a7f5a4bd834d5f7e16e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a1cebccbfcdd3b63b54b6387e87c0b83930e5bbe98a7f5a4bd834d5f7e16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a1cebccbfcdd3b63b54b6387e87c0b83930e5bbe98a7f5a4bd834d5f7e16e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.550383522 +0000 UTC m=+0.150350891 container init 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.557658399 +0000 UTC m=+0.157625728 container start 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.561653416 +0000 UTC m=+0.161620755 container attach 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:48:57 compute-0 ceph-mon[75334]: pgmap v1739: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 KiB/s wr, 65 op/s
Feb 02 15:48:57 compute-0 laughing_spence[271804]: {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     "0": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "devices": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "/dev/loop3"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             ],
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_name": "ceph_lv0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_size": "21470642176",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "name": "ceph_lv0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "tags": {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_name": "ceph",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.crush_device_class": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.encrypted": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.objectstore": "bluestore",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_id": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.vdo": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.with_tpm": "0"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             },
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "vg_name": "ceph_vg0"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         }
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     ],
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     "1": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "devices": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "/dev/loop4"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             ],
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_name": "ceph_lv1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_size": "21470642176",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "name": "ceph_lv1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "tags": {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_name": "ceph",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.crush_device_class": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.encrypted": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.objectstore": "bluestore",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_id": "1",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.vdo": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.with_tpm": "0"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             },
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "vg_name": "ceph_vg1"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         }
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     ],
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     "2": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "devices": [
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "/dev/loop5"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             ],
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_name": "ceph_lv2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_size": "21470642176",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "name": "ceph_lv2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "tags": {
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.cluster_name": "ceph",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.crush_device_class": "",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.encrypted": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.objectstore": "bluestore",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osd_id": "2",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.vdo": "0",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:                 "ceph.with_tpm": "0"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             },
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "type": "block",
Feb 02 15:48:57 compute-0 laughing_spence[271804]:             "vg_name": "ceph_vg2"
Feb 02 15:48:57 compute-0 laughing_spence[271804]:         }
Feb 02 15:48:57 compute-0 laughing_spence[271804]:     ]
Feb 02 15:48:57 compute-0 laughing_spence[271804]: }
Feb 02 15:48:57 compute-0 systemd[1]: libpod-5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a.scope: Deactivated successfully.
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.837401819 +0000 UTC m=+0.437369138 container died 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7a1cebccbfcdd3b63b54b6387e87c0b83930e5bbe98a7f5a4bd834d5f7e16e-merged.mount: Deactivated successfully.
Feb 02 15:48:57 compute-0 podman[271787]: 2026-02-02 15:48:57.88551257 +0000 UTC m=+0.485479889 container remove 5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_spence, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:48:57 compute-0 systemd[1]: libpod-conmon-5b9cdf80563a27d729a02b44abc854e08213d5d83bc7e9936ebcf366f170641a.scope: Deactivated successfully.
Feb 02 15:48:57 compute-0 sudo[271710]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:57 compute-0 sudo[271828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:48:57 compute-0 sudo[271828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:57 compute-0 sudo[271828]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:58 compute-0 sudo[271853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:48:58 compute-0 sudo[271853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.3 KiB/s wr, 61 op/s
Feb 02 15:48:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.291538664 +0000 UTC m=+0.037545204 container create 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:48:58 compute-0 systemd[1]: Started libpod-conmon-2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d.scope.
Feb 02 15:48:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.363940537 +0000 UTC m=+0.109947097 container init 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.369101523 +0000 UTC m=+0.115108063 container start 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.274983152 +0000 UTC m=+0.020989722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.372106636 +0000 UTC m=+0.118113206 container attach 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:48:58 compute-0 sad_lederberg[271906]: 167 167
Feb 02 15:48:58 compute-0 systemd[1]: libpod-2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d.scope: Deactivated successfully.
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.374155905 +0000 UTC m=+0.120162455 container died 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd18b49ea885c3aad9babd2851f18c3da4b38334c5567aa947fd17cd35d0a24-merged.mount: Deactivated successfully.
Feb 02 15:48:58 compute-0 podman[271890]: 2026-02-02 15:48:58.412491839 +0000 UTC m=+0.158498379 container remove 2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:48:58 compute-0 systemd[1]: libpod-conmon-2c1131ca3b0b9fc831844f9fc7e24600703e604fadb45cb591ec8b6dfd50ce9d.scope: Deactivated successfully.
Feb 02 15:48:58 compute-0 podman[271930]: 2026-02-02 15:48:58.545928287 +0000 UTC m=+0.052348245 container create 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:48:58 compute-0 systemd[1]: Started libpod-conmon-501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd.scope.
Feb 02 15:48:58 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:48:58 compute-0 podman[271930]: 2026-02-02 15:48:58.523247595 +0000 UTC m=+0.029667563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1885fd81c680419e0948a5a3d056a1f29e31f655552a069f37a5da07fbd2410/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1885fd81c680419e0948a5a3d056a1f29e31f655552a069f37a5da07fbd2410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1885fd81c680419e0948a5a3d056a1f29e31f655552a069f37a5da07fbd2410/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1885fd81c680419e0948a5a3d056a1f29e31f655552a069f37a5da07fbd2410/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:48:58 compute-0 podman[271930]: 2026-02-02 15:48:58.63517923 +0000 UTC m=+0.141599188 container init 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:48:58 compute-0 podman[271930]: 2026-02-02 15:48:58.643751148 +0000 UTC m=+0.150171066 container start 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:48:58 compute-0 podman[271930]: 2026-02-02 15:48:58.64748173 +0000 UTC m=+0.153901678 container attach 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:48:58 compute-0 ovn_controller[144995]: 2026-02-02T15:48:58Z|00259|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:48:58 compute-0 nova_compute[239545]: 2026-02-02 15:48:58.973 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:59 compute-0 lvm[272025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:48:59 compute-0 lvm[272025]: VG ceph_vg1 finished
Feb 02 15:48:59 compute-0 lvm[272023]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:48:59 compute-0 lvm[272023]: VG ceph_vg0 finished
Feb 02 15:48:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:59.259 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:48:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:59.259 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:48:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:48:59.261 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:48:59 compute-0 lvm[272027]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:48:59 compute-0 lvm[272027]: VG ceph_vg2 finished
Feb 02 15:48:59 compute-0 blissful_proskuriakova[271946]: {}
Feb 02 15:48:59 compute-0 systemd[1]: libpod-501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd.scope: Deactivated successfully.
Feb 02 15:48:59 compute-0 systemd[1]: libpod-501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd.scope: Consumed 1.089s CPU time.
Feb 02 15:48:59 compute-0 podman[271930]: 2026-02-02 15:48:59.370978359 +0000 UTC m=+0.877398307 container died 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1885fd81c680419e0948a5a3d056a1f29e31f655552a069f37a5da07fbd2410-merged.mount: Deactivated successfully.
Feb 02 15:48:59 compute-0 podman[271930]: 2026-02-02 15:48:59.412645767 +0000 UTC m=+0.919065685 container remove 501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:48:59 compute-0 systemd[1]: libpod-conmon-501b704128926d6a24f82934b4d66737c0ab5069f72e2acc0dd4098c2ea3a7bd.scope: Deactivated successfully.
Feb 02 15:48:59 compute-0 sudo[271853]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:48:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:48:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:59 compute-0 nova_compute[239545]: 2026-02-02 15:48:59.505 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:48:59 compute-0 sudo[272043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:48:59 compute-0 sudo[272043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:48:59 compute-0 sudo[272043]: pam_unix(sudo:session): session closed for user root
Feb 02 15:48:59 compute-0 ceph-mon[75334]: pgmap v1740: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.3 KiB/s wr, 61 op/s
Feb 02 15:48:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:48:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:49:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.2 KiB/s wr, 89 op/s
Feb 02 15:49:01 compute-0 nova_compute[239545]: 2026-02-02 15:49:01.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:01 compute-0 nova_compute[239545]: 2026-02-02 15:49:01.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:49:01 compute-0 ceph-mon[75334]: pgmap v1741: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.2 KiB/s wr, 89 op/s
Feb 02 15:49:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.8 KiB/s wr, 86 op/s
Feb 02 15:49:02 compute-0 nova_compute[239545]: 2026-02-02 15:49:02.428 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:02 compute-0 ovn_controller[144995]: 2026-02-02T15:49:02Z|00260|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:49:02 compute-0 nova_compute[239545]: 2026-02-02 15:49:02.838 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Feb 02 15:49:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Feb 02 15:49:03 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Feb 02 15:49:03 compute-0 ceph-mon[75334]: pgmap v1742: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.8 KiB/s wr, 86 op/s
Feb 02 15:49:03 compute-0 ceph-mon[75334]: osdmap e483: 3 total, 3 up, 3 in
Feb 02 15:49:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 54 op/s
Feb 02 15:49:04 compute-0 nova_compute[239545]: 2026-02-02 15:49:04.506 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:05 compute-0 ceph-mon[75334]: pgmap v1744: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 54 op/s
Feb 02 15:49:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 409 B/s wr, 37 op/s
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.432 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.564 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.565 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.565 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:49:07 compute-0 ceph-mon[75334]: pgmap v1745: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 409 B/s wr, 37 op/s
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.940 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.940 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.941 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:49:07 compute-0 nova_compute[239545]: 2026-02-02 15:49:07.941 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:49:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 409 B/s wr, 37 op/s
Feb 02 15:49:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.508 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.685 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:49:09 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:09.697 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.698 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:09 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:09.699 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.701 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.702 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:49:09 compute-0 nova_compute[239545]: 2026-02-02 15:49:09.702 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:09 compute-0 ceph-mon[75334]: pgmap v1746: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 409 B/s wr, 37 op/s
Feb 02 15:49:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:11 compute-0 ceph-mon[75334]: pgmap v1747: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:12 compute-0 nova_compute[239545]: 2026-02-02 15:49:12.433 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:12 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:12.701 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:49:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:13 compute-0 nova_compute[239545]: 2026-02-02 15:49:13.614 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:13 compute-0 ceph-mon[75334]: pgmap v1748: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:14 compute-0 nova_compute[239545]: 2026-02-02 15:49:14.510 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:14 compute-0 nova_compute[239545]: 2026-02-02 15:49:14.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:14 compute-0 nova_compute[239545]: 2026-02-02 15:49:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:14 compute-0 nova_compute[239545]: 2026-02-02 15:49:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:15 compute-0 ceph-mon[75334]: pgmap v1749: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:15 compute-0 nova_compute[239545]: 2026-02-02 15:49:15.945 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.436 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.574 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.574 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:49:17 compute-0 nova_compute[239545]: 2026-02-02 15:49:17.574 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:17 compute-0 ceph-mon[75334]: pgmap v1750: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:49:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263625255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.119 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.189 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.189 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.189 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:49:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.352 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.353 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4124MB free_disk=59.94251137692481GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.353 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.353 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.554 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.555 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.556 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:49:18 compute-0 nova_compute[239545]: 2026-02-02 15:49:18.711 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3263625255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:49:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3972215593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.252 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.257 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.271 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.272 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.272 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.512 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:49:19 compute-0 nova_compute[239545]: 2026-02-02 15:49:19.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:19 compute-0 ceph-mon[75334]: pgmap v1751: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3972215593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:20 compute-0 nova_compute[239545]: 2026-02-02 15:49:20.059 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:21 compute-0 ceph-mon[75334]: pgmap v1752: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:22 compute-0 podman[272114]: 2026-02-02 15:49:22.333955897 +0000 UTC m=+0.063942709 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 02 15:49:22 compute-0 podman[272113]: 2026-02-02 15:49:22.408480911 +0000 UTC m=+0.138483344 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 02 15:49:22 compute-0 nova_compute[239545]: 2026-02-02 15:49:22.438 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.242953) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363242985, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 889, "num_deletes": 251, "total_data_size": 1220244, "memory_usage": 1239528, "flush_reason": "Manual Compaction"}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363250501, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 784680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34254, "largest_seqno": 35142, "table_properties": {"data_size": 780976, "index_size": 1420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9902, "raw_average_key_size": 20, "raw_value_size": 772993, "raw_average_value_size": 1630, "num_data_blocks": 64, "num_entries": 474, "num_filter_entries": 474, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047290, "oldest_key_time": 1770047290, "file_creation_time": 1770047363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 7610 microseconds, and 2122 cpu microseconds.
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.250557) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 784680 bytes OK
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.250578) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.254906) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.254939) EVENT_LOG_v1 {"time_micros": 1770047363254931, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.254963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1215894, prev total WAL file size 1215894, number of live WAL files 2.
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.255599) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(766KB)], [68(11MB)]
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363255653, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13086211, "oldest_snapshot_seqno": -1}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6787 keys, 10155085 bytes, temperature: kUnknown
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363307650, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10155085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10105082, "index_size": 31907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 170112, "raw_average_key_size": 25, "raw_value_size": 9978661, "raw_average_value_size": 1470, "num_data_blocks": 1276, "num_entries": 6787, "num_filter_entries": 6787, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.308308) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10155085 bytes
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.312828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 250.0 rd, 194.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.7 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(29.6) write-amplify(12.9) OK, records in: 7274, records dropped: 487 output_compression: NoCompression
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.312874) EVENT_LOG_v1 {"time_micros": 1770047363312856, "job": 38, "event": "compaction_finished", "compaction_time_micros": 52339, "compaction_time_cpu_micros": 19544, "output_level": 6, "num_output_files": 1, "total_output_size": 10155085, "num_input_records": 7274, "num_output_records": 6787, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363313277, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047363315153, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.255484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.315321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.315329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.315331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.315333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:49:23.315335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:49:23 compute-0 ceph-mon[75334]: pgmap v1753: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:24 compute-0 nova_compute[239545]: 2026-02-02 15:49:24.514 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:25 compute-0 ceph-mon[75334]: pgmap v1754: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:27 compute-0 nova_compute[239545]: 2026-02-02 15:49:27.440 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:27 compute-0 ceph-mon[75334]: pgmap v1755: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:28 compute-0 nova_compute[239545]: 2026-02-02 15:49:28.069 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:49:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3876077107' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:49:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:49:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3876077107' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:49:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:28 compute-0 nova_compute[239545]: 2026-02-02 15:49:28.561 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:49:28 compute-0 nova_compute[239545]: 2026-02-02 15:49:28.562 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:49:28 compute-0 nova_compute[239545]: 2026-02-02 15:49:28.578 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:49:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3876077107' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:49:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3876077107' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:49:29 compute-0 nova_compute[239545]: 2026-02-02 15:49:29.516 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:29 compute-0 ceph-mon[75334]: pgmap v1756: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:31 compute-0 ceph-mon[75334]: pgmap v1757: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:32 compute-0 nova_compute[239545]: 2026-02-02 15:49:32.442 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:33 compute-0 ceph-mon[75334]: pgmap v1758: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:34 compute-0 nova_compute[239545]: 2026-02-02 15:49:34.519 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:35 compute-0 ceph-mon[75334]: pgmap v1759: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:49:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:49:37 compute-0 nova_compute[239545]: 2026-02-02 15:49:37.444 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:37 compute-0 ceph-mon[75334]: pgmap v1760: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:49:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:49:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Feb 02 15:49:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Feb 02 15:49:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Feb 02 15:49:39 compute-0 nova_compute[239545]: 2026-02-02 15:49:39.521 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:40 compute-0 ceph-mon[75334]: pgmap v1761: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:49:40 compute-0 ceph-mon[75334]: osdmap e484: 3 total, 3 up, 3 in
Feb 02 15:49:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 26 KiB/s wr, 5 op/s
Feb 02 15:49:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:49:40 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074800605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:41 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2074800605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 27 KiB/s wr, 23 op/s
Feb 02 15:49:42 compute-0 ceph-mon[75334]: pgmap v1763: 305 pgs: 305 active+clean; 170 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 26 KiB/s wr, 5 op/s
Feb 02 15:49:42 compute-0 nova_compute[239545]: 2026-02-02 15:49:42.446 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:49:42
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Feb 02 15:49:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:49:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:43 compute-0 ceph-mon[75334]: pgmap v1764: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 27 KiB/s wr, 23 op/s
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 28 KiB/s wr, 30 op/s
Feb 02 15:49:44 compute-0 nova_compute[239545]: 2026-02-02 15:49:44.523 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:49:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:49:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:49:45 compute-0 ceph-mon[75334]: pgmap v1765: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 28 KiB/s wr, 30 op/s
Feb 02 15:49:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 02 15:49:47 compute-0 ceph-mon[75334]: pgmap v1766: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 02 15:49:47 compute-0 nova_compute[239545]: 2026-02-02 15:49:47.493 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 02 15:49:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:49 compute-0 ceph-mon[75334]: pgmap v1767: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 02 15:49:49 compute-0 nova_compute[239545]: 2026-02-02 15:49:49.525 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.123 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.123 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.147 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.233 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.233 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.242 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.242 239549 INFO nova.compute.claims [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:49:51 compute-0 ceph-mon[75334]: pgmap v1768: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.385 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:49:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262750385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.924 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.928 239549 DEBUG nova.compute.provider_tree [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.951 239549 DEBUG nova.scheduler.client.report [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.980 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:51 compute-0 nova_compute[239545]: 2026-02-02 15:49:51.982 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.045 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.045 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.076 239549 INFO nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.101 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:49:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.207 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.209 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.210 239549 INFO nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Creating image(s)
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.242 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.273 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4262750385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.304 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.307 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.366 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.367 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "e549e1d4a799e21648bb967f475c246d2a533bcb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.368 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.368 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "e549e1d4a799e21648bb967f475c246d2a533bcb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.388 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.391 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.497 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.579 239549 DEBUG nova.policy [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16b55bfc98574e0096db4f19bcdcbb2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.601 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e549e1d4a799e21648bb967f475c246d2a533bcb 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.660 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] resizing rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.734 239549 DEBUG nova.objects.instance [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'migration_context' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.748 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.749 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Ensure instance console log exists: /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.749 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.750 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:52 compute-0 nova_compute[239545]: 2026-02-02 15:49:52.750 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:53 compute-0 ceph-mon[75334]: pgmap v1769: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Feb 02 15:49:53 compute-0 podman[272349]: 2026-02-02 15:49:53.322323739 +0000 UTC m=+0.056459171 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:49:53 compute-0 podman[272348]: 2026-02-02 15:49:53.357607306 +0000 UTC m=+0.097017851 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 185 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 661 KiB/s wr, 22 op/s
Feb 02 15:49:54 compute-0 nova_compute[239545]: 2026-02-02 15:49:54.528 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008889520945334455 of space, bias 1.0, pg target 0.2666856283600336 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00038818599827363946 of space, bias 1.0, pg target 0.11645579948209184 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.882308967909776e-06 of space, bias 1.0, pg target 0.0011646926903729328 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677592228405518 of space, bias 1.0, pg target 0.20032776685216552 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4240219931960486e-06 of space, bias 4.0, pg target 0.0017088263918352583 quantized to 16 (current 16)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:49:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:49:54 compute-0 nova_compute[239545]: 2026-02-02 15:49:54.647 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Successfully created port: bb84195f-05e7-45f3-871b-6bd27abe7803 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:49:55 compute-0 ceph-mon[75334]: pgmap v1770: 305 pgs: 305 active+clean; 185 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 661 KiB/s wr, 22 op/s
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.891 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Successfully updated port: bb84195f-05e7-45f3-871b-6bd27abe7803 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.921 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.921 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquired lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.921 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.976 239549 DEBUG nova.compute.manager [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-changed-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.976 239549 DEBUG nova.compute.manager [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Refreshing instance network info cache due to event network-changed-bb84195f-05e7-45f3-871b-6bd27abe7803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:49:55 compute-0 nova_compute[239545]: 2026-02-02 15:49:55.976 239549 DEBUG oslo_concurrency.lockutils [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:49:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 02 15:49:56 compute-0 nova_compute[239545]: 2026-02-02 15:49:56.579 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:49:57 compute-0 ceph-mon[75334]: pgmap v1771: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.548 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.866 239549 DEBUG nova.network.neutron [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updating instance_info_cache with network_info: [{"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.894 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Releasing lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.895 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Instance network_info: |[{"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.895 239549 DEBUG oslo_concurrency.lockutils [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.896 239549 DEBUG nova.network.neutron [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Refreshing network info cache for port bb84195f-05e7-45f3-871b-6bd27abe7803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.900 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Start _get_guest_xml network_info=[{"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'boot_index': 0, 'disk_bus': 'virtio', 'guest_format': None, 'encryption_options': None, 'encryption_format': None, 'encrypted': False, 'size': 0, 'encryption_secret_uuid': None, 'image_id': '271bf15b-9e9a-428a-a098-dcc68b158a7a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.906 239549 WARNING nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.914 239549 DEBUG nova.virt.libvirt.host [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.914 239549 DEBUG nova.virt.libvirt.host [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.922 239549 DEBUG nova.virt.libvirt.host [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.922 239549 DEBUG nova.virt.libvirt.host [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.923 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.923 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T15:29:18Z,direct_url=<?>,disk_format='qcow2',id=271bf15b-9e9a-428a-a098-dcc68b158a7a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='36c10c66ac7b49c798cd06678a3a8645',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T15:29:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.924 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.924 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.924 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.925 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.925 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.925 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.925 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.926 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.926 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.926 239549 DEBUG nova.virt.hardware [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:49:57 compute-0 nova_compute[239545]: 2026-02-02 15:49:57.930 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 15:49:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:49:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:49:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501133103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:58 compute-0 nova_compute[239545]: 2026-02-02 15:49:58.463 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:58 compute-0 nova_compute[239545]: 2026-02-02 15:49:58.493 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:58 compute-0 nova_compute[239545]: 2026-02-02 15:49:58.498 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:49:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062925624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.050 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.051 239549 DEBUG nova.virt.libvirt.vif [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:49:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1819877545',display_name='tempest-TestEncryptedCinderVolumes-server-1819877545',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1819877545',id=27,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaOlyxmAyuy76oWtAo15Hl9/4Q0MejG5dv89gID1Tzgvaqktd9FkvkddJltkvqWs4rflVs9BoIu+pRNya2tmUA0lRv7tq9xh7HtYLzpAbqRmAQM05vadKi0bX2BV/0beQ==',key_name='tempest-keypair-199719867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-fezb31rz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:49:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=2d2eca14-3fbd-4b14-89c7-1222669b1ce0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.052 239549 DEBUG nova.network.os_vif_util [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.053 239549 DEBUG nova.network.os_vif_util [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.054 239549 DEBUG nova.objects.instance [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.070 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <uuid>2d2eca14-3fbd-4b14-89c7-1222669b1ce0</uuid>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <name>instance-0000001b</name>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1819877545</nova:name>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:49:57</nova:creationTime>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:user uuid="16b55bfc98574e0096db4f19bcdcbb2e">tempest-TestEncryptedCinderVolumes-987785960-project-member</nova:user>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:project uuid="6e1abae6c1404ce2b24265e7136ffe6a">tempest-TestEncryptedCinderVolumes-987785960</nova:project>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:root type="image" uuid="271bf15b-9e9a-428a-a098-dcc68b158a7a"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <nova:port uuid="bb84195f-05e7-45f3-871b-6bd27abe7803">
Feb 02 15:49:59 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <system>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="serial">2d2eca14-3fbd-4b14-89c7-1222669b1ce0</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="uuid">2d2eca14-3fbd-4b14-89c7-1222669b1ce0</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </system>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <os>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </os>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <features>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </features>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk">
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </source>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config">
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </source>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:49:59 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:e2:01:ba"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <target dev="tapbb84195f-05"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/console.log" append="off"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <video>
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </video>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:49:59 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:49:59 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:49:59 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:49:59 compute-0 nova_compute[239545]: </domain>
Feb 02 15:49:59 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.071 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Preparing to wait for external event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.071 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.072 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.072 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.073 239549 DEBUG nova.virt.libvirt.vif [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:49:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1819877545',display_name='tempest-TestEncryptedCinderVolumes-server-1819877545',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1819877545',id=27,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaOlyxmAyuy76oWtAo15Hl9/4Q0MejG5dv89gID1Tzgvaqktd9FkvkddJltkvqWs4rflVs9BoIu+pRNya2tmUA0lRv7tq9xh7HtYLzpAbqRmAQM05vadKi0bX2BV/0beQ==',key_name='tempest-keypair-199719867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-fezb31rz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:49:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=2d2eca14-3fbd-4b14-89c7-1222669b1ce0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.073 239549 DEBUG nova.network.os_vif_util [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.074 239549 DEBUG nova.network.os_vif_util [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.074 239549 DEBUG os_vif [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.075 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.075 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.076 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.080 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.080 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb84195f-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.081 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb84195f-05, col_values=(('external_ids', {'iface-id': 'bb84195f-05e7-45f3-871b-6bd27abe7803', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:01:ba', 'vm-uuid': '2d2eca14-3fbd-4b14-89c7-1222669b1ce0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.082 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.0836] manager: (tapbb84195f-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.084 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.090 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.091 239549 INFO os_vif [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05')
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.138 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.139 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.139 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No VIF found with MAC fa:16:3e:e2:01:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.140 239549 INFO nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Using config drive
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.158 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.260 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.261 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.261 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:59 compute-0 ceph-mon[75334]: pgmap v1772: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 15:49:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1501133103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:59 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3062925624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.482 239549 INFO nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Creating config drive at /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.486 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6xi3psw4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.529 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 sudo[272480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:49:59 compute-0 sudo[272480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:49:59 compute-0 sudo[272480]: pam_unix(sudo:session): session closed for user root
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.584 239549 DEBUG nova.network.neutron [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updated VIF entry in instance network info cache for port bb84195f-05e7-45f3-871b-6bd27abe7803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.585 239549 DEBUG nova.network.neutron [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updating instance_info_cache with network_info: [{"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.601 239549 DEBUG oslo_concurrency.lockutils [req-79f324a8-7d31-4a5a-aedb-bae2c72e2766 req-5d1841fc-f407-402b-8b4f-f633dd67877e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.607 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6xi3psw4" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:59 compute-0 sudo[272505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:49:59 compute-0 sudo[272505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.625 239549 DEBUG nova.storage.rbd_utils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.628 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.729 239549 DEBUG oslo_concurrency.processutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config 2d2eca14-3fbd-4b14-89c7-1222669b1ce0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.731 239549 INFO nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Deleting local config drive /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0/disk.config because it was imported into RBD.
Feb 02 15:49:59 compute-0 kernel: tapbb84195f-05: entered promiscuous mode
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.7746] manager: (tapbb84195f-05): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Feb 02 15:49:59 compute-0 ovn_controller[144995]: 2026-02-02T15:49:59Z|00261|binding|INFO|Claiming lport bb84195f-05e7-45f3-871b-6bd27abe7803 for this chassis.
Feb 02 15:49:59 compute-0 ovn_controller[144995]: 2026-02-02T15:49:59Z|00262|binding|INFO|bb84195f-05e7-45f3-871b-6bd27abe7803: Claiming fa:16:3e:e2:01:ba 10.100.0.9
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.777 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.786 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:01:ba 10.100.0.9'], port_security=['fa:16:3e:e2:01:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2d2eca14-3fbd-4b14-89c7-1222669b1ce0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f1fa883-939f-4034-a7da-27482c2d1bd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=bb84195f-05e7-45f3-871b-6bd27abe7803) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:49:59 compute-0 ovn_controller[144995]: 2026-02-02T15:49:59Z|00263|binding|INFO|Setting lport bb84195f-05e7-45f3-871b-6bd27abe7803 up in Southbound
Feb 02 15:49:59 compute-0 ovn_controller[144995]: 2026-02-02T15:49:59Z|00264|binding|INFO|Setting lport bb84195f-05e7-45f3-871b-6bd27abe7803 ovn-installed in OVS
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.788 154982 INFO neutron.agent.ovn.metadata.agent [-] Port bb84195f-05e7-45f3-871b-6bd27abe7803 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c bound to our chassis
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.790 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.792 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.797 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f719e1-88bf-45c6-a077-841d72b2770f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.798 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.798 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap571a8d26-11 in ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.799 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap571a8d26-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.799 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[b76bb46f-d9df-49a6-8d6d-78426ab2e6d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.800 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[20a26639-1df9-4ad3-adda-84e42f3e57e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 systemd-machined[207609]: New machine qemu-27-instance-0000001b.
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.808 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cbb56d-18a1-4445-8897-5515cbe9dfc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.828 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[946a5c8c-290a-4c0c-9dd1-dd003c5a91d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 systemd-udevd[272596]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.8499] device (tapbb84195f-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.8503] device (tapbb84195f-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.854 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[acd02726-8656-447b-824f-51e9464a9bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.859 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[64ac5836-82ed-438e-a7b2-cda479ab2594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.8601] manager: (tap571a8d26-10): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Feb 02 15:49:59 compute-0 systemd-udevd[272599]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.881 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5c0a5c-d4fc-43c3-88c9-03ccce8803e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.890 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[53705018-6361-4913-9f20-873f78b0e677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 NetworkManager[49171]: <info>  [1770047399.9042] device (tap571a8d26-10): carrier: link connected
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.908 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8627b1-b034-4529-a603-557b56eb92a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.919 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[05371df4-cab4-4223-a7be-163ab56ce4fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488582, 'reachable_time': 29196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272626, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.929 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[00e844a7-f425-4008-a037-ab06a0204648]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:4fa3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488582, 'tstamp': 488582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272629, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.938 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3ff7da-bfdf-4899-a058-8c24ddd6ee44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488582, 'reachable_time': 29196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272630, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.956 239549 DEBUG nova.compute.manager [req-52708751-5eb9-4fbc-bc2f-9a5d7725d466 req-1cd6e34c-2c8e-46a9-99b0-131d74b6702b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.956 239549 DEBUG oslo_concurrency.lockutils [req-52708751-5eb9-4fbc-bc2f-9a5d7725d466 req-1cd6e34c-2c8e-46a9-99b0-131d74b6702b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.956 239549 DEBUG oslo_concurrency.lockutils [req-52708751-5eb9-4fbc-bc2f-9a5d7725d466 req-1cd6e34c-2c8e-46a9-99b0-131d74b6702b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.956 239549 DEBUG oslo_concurrency.lockutils [req-52708751-5eb9-4fbc-bc2f-9a5d7725d466 req-1cd6e34c-2c8e-46a9-99b0-131d74b6702b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:49:59 compute-0 nova_compute[239545]: 2026-02-02 15:49:59.957 239549 DEBUG nova.compute.manager [req-52708751-5eb9-4fbc-bc2f-9a5d7725d466 req-1cd6e34c-2c8e-46a9-99b0-131d74b6702b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Processing event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:49:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:49:59.959 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd2f985-0bb7-4309-abbe-1157ab69c24c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.006 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[da9880e8-0481-4b7c-9a77-79007c7774c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.009 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.009 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.009 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap571a8d26-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.012 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:00 compute-0 kernel: tap571a8d26-10: entered promiscuous mode
Feb 02 15:50:00 compute-0 NetworkManager[49171]: <info>  [1770047400.0130] manager: (tap571a8d26-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.014 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.016 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap571a8d26-10, col_values=(('external_ids', {'iface-id': '394690c2-9066-491c-bd5b-f924947b57f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.017 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.018 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:00 compute-0 ovn_controller[144995]: 2026-02-02T15:50:00Z|00265|binding|INFO|Releasing lport 394690c2-9066-491c-bd5b-f924947b57f3 from this chassis (sb_readonly=0)
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.018 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.019 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1c2a76-e950-480f-86e7-80b25c9668ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.020 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:50:00 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:00.022 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'env', 'PROCESS_TAG=haproxy-571a8d26-1b08-4233-a158-71a28cbbf88c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/571a8d26-1b08-4233-a158-71a28cbbf88c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.022 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:00 compute-0 sudo[272505]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:50:00 compute-0 sudo[272694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:50:00 compute-0 sudo[272694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:00 compute-0 sudo[272694]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.199 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047400.1994338, 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.200 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] VM Started (Lifecycle Event)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.202 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.206 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.210 239549 INFO nova.virt.libvirt.driver [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Instance spawned successfully.
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.210 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.217 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.221 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:50:00 compute-0 sudo[272721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:50:00 compute-0 sudo[272721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.230 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.230 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.231 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.231 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.232 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.232 239549 DEBUG nova.virt.libvirt.driver [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.236 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.237 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047400.2019405, 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.237 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] VM Paused (Lifecycle Event)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.270 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.273 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047400.2055888, 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.273 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] VM Resumed (Lifecycle Event)
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:50:00 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:50:00 compute-0 podman[272768]: 2026-02-02 15:50:00.390070878 +0000 UTC m=+0.055388744 container create 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.409 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.413 239549 INFO nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Took 8.21 seconds to spawn the instance on the hypervisor.
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.414 239549 DEBUG nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.420 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:50:00 compute-0 systemd[1]: Started libpod-conmon-42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b.scope.
Feb 02 15:50:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e5ec2c1b519b649aa64e1adcbfe207c4cc73169cda48e5563dbac3415bb3d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 podman[272768]: 2026-02-02 15:50:00.453900134 +0000 UTC m=+0.119218190 container init 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.454 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:50:00 compute-0 podman[272768]: 2026-02-02 15:50:00.361664674 +0000 UTC m=+0.026982720 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:50:00 compute-0 podman[272768]: 2026-02-02 15:50:00.459103344 +0000 UTC m=+0.124421210 container start 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.477 239549 INFO nova.compute.manager [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Took 9.28 seconds to build instance.
Feb 02 15:50:00 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [NOTICE]   (272799) : New worker (272803) forked
Feb 02 15:50:00 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [NOTICE]   (272799) : Loading success.
Feb 02 15:50:00 compute-0 nova_compute[239545]: 2026-02-02 15:50:00.499 239549 DEBUG oslo_concurrency.lockutils [None req-cda65ee2-f401-4b80-b2ef-1499e521a2a4 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.51777738 +0000 UTC m=+0.035515564 container create e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:50:00 compute-0 systemd[1]: Started libpod-conmon-e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b.scope.
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.502437424 +0000 UTC m=+0.020175638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.63108081 +0000 UTC m=+0.148819034 container init e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.637995604 +0000 UTC m=+0.155733798 container start e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.641464982 +0000 UTC m=+0.159203246 container attach e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:50:00 compute-0 flamboyant_newton[272826]: 167 167
Feb 02 15:50:00 compute-0 systemd[1]: libpod-e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b.scope: Deactivated successfully.
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.648111848 +0000 UTC m=+0.165850042 container died e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0667ea08b34461bf7ec8e8cf7462876ba7729724f8bf1e57cfcc0f61730bd689-merged.mount: Deactivated successfully.
Feb 02 15:50:00 compute-0 podman[272801]: 2026-02-02 15:50:00.692832473 +0000 UTC m=+0.210570667 container remove e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_newton, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:50:00 compute-0 systemd[1]: libpod-conmon-e9f12daa17b531bc2f7481391afa57b0db29e0f55d6d8b38933eb5e549357a0b.scope: Deactivated successfully.
Feb 02 15:50:00 compute-0 podman[272851]: 2026-02-02 15:50:00.842502437 +0000 UTC m=+0.045704701 container create 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:50:00 compute-0 systemd[1]: Started libpod-conmon-5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e.scope.
Feb 02 15:50:00 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:00 compute-0 podman[272851]: 2026-02-02 15:50:00.822041802 +0000 UTC m=+0.025244086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:00 compute-0 podman[272851]: 2026-02-02 15:50:00.940098022 +0000 UTC m=+0.143300286 container init 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 02 15:50:00 compute-0 podman[272851]: 2026-02-02 15:50:00.946225956 +0000 UTC m=+0.149428220 container start 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:50:00 compute-0 podman[272851]: 2026-02-02 15:50:00.94954871 +0000 UTC m=+0.152750974 container attach 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 02 15:50:01 compute-0 ceph-mon[75334]: pgmap v1773: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 02 15:50:01 compute-0 modest_brahmagupta[272869]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:50:01 compute-0 modest_brahmagupta[272869]: --> All data devices are unavailable
Feb 02 15:50:01 compute-0 systemd[1]: libpod-5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e.scope: Deactivated successfully.
Feb 02 15:50:01 compute-0 podman[272851]: 2026-02-02 15:50:01.364014734 +0000 UTC m=+0.567216998 container died 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2660900dd876d281f0b88c1707e13f82a383b5caadced76f1670b598543f1856-merged.mount: Deactivated successfully.
Feb 02 15:50:01 compute-0 podman[272851]: 2026-02-02 15:50:01.396390158 +0000 UTC m=+0.599592422 container remove 5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:50:01 compute-0 systemd[1]: libpod-conmon-5bab8397849f7fd8622bcbacf707960a92df25a5fc99301d7d2164365160028e.scope: Deactivated successfully.
Feb 02 15:50:01 compute-0 sudo[272721]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:01 compute-0 sudo[272901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:50:01 compute-0 sudo[272901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:01 compute-0 sudo[272901]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:01 compute-0 sudo[272926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:50:01 compute-0 sudo[272926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.79128331 +0000 UTC m=+0.045722941 container create f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:50:01 compute-0 systemd[1]: Started libpod-conmon-f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9.scope.
Feb 02 15:50:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.77101614 +0000 UTC m=+0.025455791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.873727944 +0000 UTC m=+0.128167605 container init f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.879371526 +0000 UTC m=+0.133811167 container start f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:50:01 compute-0 festive_jemison[272979]: 167 167
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.882510575 +0000 UTC m=+0.136950226 container attach f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:50:01 compute-0 systemd[1]: libpod-f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9.scope: Deactivated successfully.
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.88474788 +0000 UTC m=+0.139187511 container died f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-119761b89423929721e15afa131a7522b8cb028a56bca83fd6241c90cba9e309-merged.mount: Deactivated successfully.
Feb 02 15:50:01 compute-0 podman[272962]: 2026-02-02 15:50:01.92092562 +0000 UTC m=+0.175365251 container remove f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:50:01 compute-0 systemd[1]: libpod-conmon-f02255f18687c150c5566e7a98562c9b4097e1e5927e0acf3aed3bb3d24020e9.scope: Deactivated successfully.
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.023 239549 DEBUG nova.compute.manager [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.024 239549 DEBUG oslo_concurrency.lockutils [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.024 239549 DEBUG oslo_concurrency.lockutils [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.024 239549 DEBUG oslo_concurrency.lockutils [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.025 239549 DEBUG nova.compute.manager [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] No waiting events found dispatching network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:50:02 compute-0 nova_compute[239545]: 2026-02-02 15:50:02.025 239549 WARNING nova.compute.manager [req-80c7fc14-00ad-4ea9-9c76-a62d96935fd7 req-926699c2-e39e-4b06-b8d8-05661aa6235e d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received unexpected event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 for instance with vm_state active and task_state None.
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.06880718 +0000 UTC m=+0.050958402 container create 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 02 15:50:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 15:50:02 compute-0 systemd[1]: Started libpod-conmon-127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b.scope.
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.048741986 +0000 UTC m=+0.030893228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665a3d561f4ad868d4986f3bc416b8c2f688b22a7839d345af8d1dabaff60390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665a3d561f4ad868d4986f3bc416b8c2f688b22a7839d345af8d1dabaff60390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665a3d561f4ad868d4986f3bc416b8c2f688b22a7839d345af8d1dabaff60390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665a3d561f4ad868d4986f3bc416b8c2f688b22a7839d345af8d1dabaff60390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.176883398 +0000 UTC m=+0.159034650 container init 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.184090339 +0000 UTC m=+0.166241581 container start 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.187837474 +0000 UTC m=+0.169988726 container attach 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:50:02 compute-0 practical_murdock[273020]: {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     "0": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "devices": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "/dev/loop3"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             ],
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_name": "ceph_lv0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_size": "21470642176",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "name": "ceph_lv0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "tags": {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_name": "ceph",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.crush_device_class": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.encrypted": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.objectstore": "bluestore",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_id": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.vdo": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.with_tpm": "0"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             },
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "vg_name": "ceph_vg0"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         }
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     ],
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     "1": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "devices": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "/dev/loop4"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             ],
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_name": "ceph_lv1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_size": "21470642176",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "name": "ceph_lv1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "tags": {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_name": "ceph",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.crush_device_class": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.encrypted": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.objectstore": "bluestore",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_id": "1",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.vdo": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.with_tpm": "0"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             },
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "vg_name": "ceph_vg1"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         }
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     ],
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     "2": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "devices": [
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "/dev/loop5"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             ],
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_name": "ceph_lv2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_size": "21470642176",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "name": "ceph_lv2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "tags": {
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.cluster_name": "ceph",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.crush_device_class": "",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.encrypted": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.objectstore": "bluestore",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osd_id": "2",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.vdo": "0",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:                 "ceph.with_tpm": "0"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             },
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "type": "block",
Feb 02 15:50:02 compute-0 practical_murdock[273020]:             "vg_name": "ceph_vg2"
Feb 02 15:50:02 compute-0 practical_murdock[273020]:         }
Feb 02 15:50:02 compute-0 practical_murdock[273020]:     ]
Feb 02 15:50:02 compute-0 practical_murdock[273020]: }
Feb 02 15:50:02 compute-0 systemd[1]: libpod-127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b.scope: Deactivated successfully.
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.490092226 +0000 UTC m=+0.472243478 container died 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 15:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-665a3d561f4ad868d4986f3bc416b8c2f688b22a7839d345af8d1dabaff60390-merged.mount: Deactivated successfully.
Feb 02 15:50:02 compute-0 podman[273003]: 2026-02-02 15:50:02.756325342 +0000 UTC m=+0.738476554 container remove 127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 02 15:50:02 compute-0 systemd[1]: libpod-conmon-127eec9bf5ae910c592fa88f42a04803a6ce8b78b4803d69ccf122ef39df6b0b.scope: Deactivated successfully.
Feb 02 15:50:02 compute-0 sudo[272926]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:02 compute-0 sudo[273043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:50:02 compute-0 sudo[273043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:02 compute-0 sudo[273043]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:02 compute-0 sudo[273068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:50:02 compute-0 sudo[273068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.117473105 +0000 UTC m=+0.032840038 container create 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb 02 15:50:03 compute-0 systemd[1]: Started libpod-conmon-452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80.scope.
Feb 02 15:50:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.181096385 +0000 UTC m=+0.096463358 container init 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.186263185 +0000 UTC m=+0.101630118 container start 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.189575018 +0000 UTC m=+0.104942031 container attach 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:50:03 compute-0 elegant_mcnulty[273120]: 167 167
Feb 02 15:50:03 compute-0 systemd[1]: libpod-452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80.scope: Deactivated successfully.
Feb 02 15:50:03 compute-0 conmon[273120]: conmon 452db3a2c8db8ff55c13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80.scope/container/memory.events
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.19245857 +0000 UTC m=+0.107825513 container died 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.10417474 +0000 UTC m=+0.019541703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfff3b0e0aa3d86b9e8ee7272fe2fa3b063b2138348a04f4bbfdd6b7eae32da6-merged.mount: Deactivated successfully.
Feb 02 15:50:03 compute-0 podman[273104]: 2026-02-02 15:50:03.232791485 +0000 UTC m=+0.148158418 container remove 452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_mcnulty, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:50:03 compute-0 systemd[1]: libpod-conmon-452db3a2c8db8ff55c134ffa9071b4d3b46f9eadf87072e27af8a1ace84b3f80.scope: Deactivated successfully.
Feb 02 15:50:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:03 compute-0 ceph-mon[75334]: pgmap v1774: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 02 15:50:03 compute-0 podman[273143]: 2026-02-02 15:50:03.369386311 +0000 UTC m=+0.035229747 container create ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:50:03 compute-0 systemd[1]: Started libpod-conmon-ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d.scope.
Feb 02 15:50:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210b5744d45be1279f2067afd35addb2851254e5c20279f1c299b4c11b77c78d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210b5744d45be1279f2067afd35addb2851254e5c20279f1c299b4c11b77c78d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210b5744d45be1279f2067afd35addb2851254e5c20279f1c299b4c11b77c78d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210b5744d45be1279f2067afd35addb2851254e5c20279f1c299b4c11b77c78d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:50:03 compute-0 podman[273143]: 2026-02-02 15:50:03.444652534 +0000 UTC m=+0.110495980 container init ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:50:03 compute-0 podman[273143]: 2026-02-02 15:50:03.449531856 +0000 UTC m=+0.115375302 container start ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:50:03 compute-0 podman[273143]: 2026-02-02 15:50:03.35502306 +0000 UTC m=+0.020866516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:50:03 compute-0 podman[273143]: 2026-02-02 15:50:03.452721936 +0000 UTC m=+0.118565372 container attach ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:50:04 compute-0 lvm[273238]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:50:04 compute-0 lvm[273236]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:50:04 compute-0 lvm[273236]: VG ceph_vg0 finished
Feb 02 15:50:04 compute-0 nova_compute[239545]: 2026-02-02 15:50:04.083 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:04 compute-0 lvm[273238]: VG ceph_vg1 finished
Feb 02 15:50:04 compute-0 lvm[273240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:50:04 compute-0 lvm[273240]: VG ceph_vg2 finished
Feb 02 15:50:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 02 15:50:04 compute-0 awesome_almeida[273159]: {}
Feb 02 15:50:04 compute-0 systemd[1]: libpod-ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d.scope: Deactivated successfully.
Feb 02 15:50:04 compute-0 systemd[1]: libpod-ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d.scope: Consumed 1.002s CPU time.
Feb 02 15:50:04 compute-0 podman[273143]: 2026-02-02 15:50:04.221596294 +0000 UTC m=+0.887439750 container died ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-210b5744d45be1279f2067afd35addb2851254e5c20279f1c299b4c11b77c78d-merged.mount: Deactivated successfully.
Feb 02 15:50:04 compute-0 podman[273143]: 2026-02-02 15:50:04.260360549 +0000 UTC m=+0.926203985 container remove ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:50:04 compute-0 systemd[1]: libpod-conmon-ecaeea5b5fb2d0bdd4edeb45930321cce1e964668580e8045f9b992774b3f95d.scope: Deactivated successfully.
Feb 02 15:50:04 compute-0 sudo[273068]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:50:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:50:04 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:04 compute-0 sudo[273255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:50:04 compute-0 sudo[273255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:50:04 compute-0 sudo[273255]: pam_unix(sudo:session): session closed for user root
Feb 02 15:50:04 compute-0 nova_compute[239545]: 2026-02-02 15:50:04.532 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:05 compute-0 ceph-mon[75334]: pgmap v1775: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 02 15:50:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:50:05 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:05.922 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:50:05 compute-0 nova_compute[239545]: 2026-02-02 15:50:05.922 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:05 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:05.924 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:50:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Feb 02 15:50:06 compute-0 nova_compute[239545]: 2026-02-02 15:50:06.215 239549 DEBUG nova.compute.manager [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-changed-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:50:06 compute-0 nova_compute[239545]: 2026-02-02 15:50:06.215 239549 DEBUG nova.compute.manager [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Refreshing instance network info cache due to event network-changed-bb84195f-05e7-45f3-871b-6bd27abe7803. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:50:06 compute-0 nova_compute[239545]: 2026-02-02 15:50:06.216 239549 DEBUG oslo_concurrency.lockutils [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:50:06 compute-0 nova_compute[239545]: 2026-02-02 15:50:06.216 239549 DEBUG oslo_concurrency.lockutils [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:50:06 compute-0 nova_compute[239545]: 2026-02-02 15:50:06.216 239549 DEBUG nova.network.neutron [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Refreshing network info cache for port bb84195f-05e7-45f3-871b-6bd27abe7803 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:50:07 compute-0 nova_compute[239545]: 2026-02-02 15:50:07.312 239549 DEBUG nova.network.neutron [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updated VIF entry in instance network info cache for port bb84195f-05e7-45f3-871b-6bd27abe7803. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:50:07 compute-0 nova_compute[239545]: 2026-02-02 15:50:07.313 239549 DEBUG nova.network.neutron [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updating instance_info_cache with network_info: [{"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:50:07 compute-0 nova_compute[239545]: 2026-02-02 15:50:07.336 239549 DEBUG oslo_concurrency.lockutils [req-2f64fab9-7a47-4030-a610-f30c6c8b2cbc req-23f1c575-0807-45b8-b963-47da7a81f74d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-2d2eca14-3fbd-4b14-89c7-1222669b1ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:50:07 compute-0 ceph-mon[75334]: pgmap v1776: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Feb 02 15:50:07 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:07.926 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 02 15:50:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:09 compute-0 nova_compute[239545]: 2026-02-02 15:50:09.085 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:09 compute-0 ceph-mon[75334]: pgmap v1777: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 02 15:50:09 compute-0 nova_compute[239545]: 2026-02-02 15:50:09.535 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:09 compute-0 nova_compute[239545]: 2026-02-02 15:50:09.563 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:09 compute-0 nova_compute[239545]: 2026-02-02 15:50:09.564 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:50:09 compute-0 nova_compute[239545]: 2026-02-02 15:50:09.565 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:50:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 02 15:50:10 compute-0 nova_compute[239545]: 2026-02-02 15:50:10.578 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:50:10 compute-0 nova_compute[239545]: 2026-02-02 15:50:10.579 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:50:10 compute-0 nova_compute[239545]: 2026-02-02 15:50:10.579 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:50:10 compute-0 nova_compute[239545]: 2026-02-02 15:50:10.580 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:50:11 compute-0 ceph-mon[75334]: pgmap v1778: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 02 15:50:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 02 15:50:12 compute-0 nova_compute[239545]: 2026-02-02 15:50:12.847 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:50:12 compute-0 nova_compute[239545]: 2026-02-02 15:50:12.864 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:50:12 compute-0 nova_compute[239545]: 2026-02-02 15:50:12.864 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:50:12 compute-0 nova_compute[239545]: 2026-02-02 15:50:12.864 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:13 compute-0 ovn_controller[144995]: 2026-02-02T15:50:13Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:01:ba 10.100.0.9
Feb 02 15:50:13 compute-0 ovn_controller[144995]: 2026-02-02T15:50:13Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:01:ba 10.100.0.9
Feb 02 15:50:13 compute-0 ceph-mon[75334]: pgmap v1779: 305 pgs: 305 active+clean; 216 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 02 15:50:14 compute-0 nova_compute[239545]: 2026-02-02 15:50:14.088 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 223 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 583 KiB/s wr, 77 op/s
Feb 02 15:50:14 compute-0 nova_compute[239545]: 2026-02-02 15:50:14.537 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:14 compute-0 nova_compute[239545]: 2026-02-02 15:50:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:14 compute-0 nova_compute[239545]: 2026-02-02 15:50:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:15 compute-0 ceph-mon[75334]: pgmap v1780: 305 pgs: 305 active+clean; 223 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 583 KiB/s wr, 77 op/s
Feb 02 15:50:15 compute-0 nova_compute[239545]: 2026-02-02 15:50:15.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Feb 02 15:50:17 compute-0 ceph-mon[75334]: pgmap v1781: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.560 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.587 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.587 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.587 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.587 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:50:17 compute-0 nova_compute[239545]: 2026-02-02 15:50:17.588 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 15:50:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:50:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550346802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.146 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.226 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.227 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.230 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.230 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.230 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:50:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.400 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.401 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3805MB free_disk=59.897080768831074GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.402 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.402 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1550346802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.502 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.503 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.503 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.503 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:50:18 compute-0 nova_compute[239545]: 2026-02-02 15:50:18.560 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.091 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:50:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155210286' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.134 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.140 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.157 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.187 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.188 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:19 compute-0 ceph-mon[75334]: pgmap v1782: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 15:50:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/155210286' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:19 compute-0 nova_compute[239545]: 2026-02-02 15:50:19.539 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 15:50:20 compute-0 nova_compute[239545]: 2026-02-02 15:50:20.174 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:20 compute-0 nova_compute[239545]: 2026-02-02 15:50:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:20 compute-0 nova_compute[239545]: 2026-02-02 15:50:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:50:20 compute-0 nova_compute[239545]: 2026-02-02 15:50:20.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:50:21 compute-0 ceph-mon[75334]: pgmap v1783: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 15:50:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.169 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.169 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.195 239549 DEBUG nova.objects.instance [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'flavor' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.237 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.455 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.455 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.455 239549 INFO nova.compute.manager [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Attaching volume 596184f8-0722-4a4e-9d05-d2841287fe8f to /dev/vdb
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.598 239549 DEBUG os_brick.utils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.599 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.608 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.608 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[d77177af-13f4-45b4-a64a-fecd5021a7b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.609 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.615 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.616 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a09fe5f8-a69e-4327-b590-aa5eb13f941d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.617 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.623 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.623 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6945bc-b5ae-44bc-a122-348d53f28467]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.625 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[86b090c4-f36b-4a3c-af10-2a79312ef004]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.626 239549 DEBUG oslo_concurrency.processutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.646 239549 DEBUG oslo_concurrency.processutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.648 239549 DEBUG os_brick.initiator.connectors.lightos [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.649 239549 DEBUG os_brick.initiator.connectors.lightos [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.649 239549 DEBUG os_brick.initiator.connectors.lightos [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.649 239549 DEBUG os_brick.utils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:50:22 compute-0 nova_compute[239545]: 2026-02-02 15:50:22.650 239549 DEBUG nova.virt.block_device [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updating existing volume attachment record: f1ef898c-1bfa-4f6f-80dd-8adf57d006da _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:50:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:50:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/978251117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:50:23 compute-0 ceph-mon[75334]: pgmap v1784: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 02 15:50:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/978251117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.553 239549 DEBUG os_brick.encryptors [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Using volume encryption metadata '{'encryption_key_id': 'fc0081f3-0ac6-491f-a28c-9f3e1968eb55', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-596184f8-0722-4a4e-9d05-d2841287fe8f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '596184f8-0722-4a4e-9d05-d2841287fe8f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2d2eca14-3fbd-4b14-89c7-1222669b1ce0', 'attached_at': '', 'detached_at': '', 'volume_id': '596184f8-0722-4a4e-9d05-d2841287fe8f', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.561 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.575 239549 DEBUG barbicanclient.v1.secrets [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.576 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.605 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.605 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.635 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.635 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.653 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.654 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.672 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.672 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.692 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.692 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.786 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.787 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.805 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.806 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.824 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.824 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.841 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.842 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.858 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.858 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.877 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.877 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.906 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.906 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.924 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.924 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.954 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.954 239549 INFO barbicanclient.base [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/fc0081f3-0ac6-491f-a28c-9f3e1968eb55
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.971 239549 DEBUG barbicanclient.client [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.971 239549 DEBUG nova.virt.libvirt.host [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:50:23 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:50:23 compute-0 nova_compute[239545]:     <volume>596184f8-0722-4a4e-9d05-d2841287fe8f</volume>
Feb 02 15:50:23 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:50:23 compute-0 nova_compute[239545]: </secret>
Feb 02 15:50:23 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:50:23 compute-0 nova_compute[239545]: 2026-02-02 15:50:23.981 239549 DEBUG nova.objects.instance [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'flavor' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:50:24 compute-0 nova_compute[239545]: 2026-02-02 15:50:24.004 239549 DEBUG nova.virt.libvirt.driver [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Attempting to attach volume 596184f8-0722-4a4e-9d05-d2841287fe8f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb 02 15:50:24 compute-0 nova_compute[239545]: 2026-02-02 15:50:24.006 239549 DEBUG nova.virt.libvirt.guest [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] attach device xml: <disk type="network" device="disk">
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-596184f8-0722-4a4e-9d05-d2841287fe8f">
Feb 02 15:50:24 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   </source>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <auth username="openstack">
Feb 02 15:50:24 compute-0 nova_compute[239545]:     <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   </auth>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <serial>596184f8-0722-4a4e-9d05-d2841287fe8f</serial>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:50:24 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="b611ba03-f09e-479c-8a11-a1adbaa6eef6"/>
Feb 02 15:50:24 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:50:24 compute-0 nova_compute[239545]: </disk>
Feb 02 15:50:24 compute-0 nova_compute[239545]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 02 15:50:24 compute-0 nova_compute[239545]: 2026-02-02 15:50:24.094 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 15:50:24 compute-0 podman[273356]: 2026-02-02 15:50:24.311475451 +0000 UTC m=+0.049838655 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:50:24 compute-0 podman[273355]: 2026-02-02 15:50:24.333407623 +0000 UTC m=+0.071590713 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:50:24 compute-0 nova_compute[239545]: 2026-02-02 15:50:24.540 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:25 compute-0 ceph-mon[75334]: pgmap v1785: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 02 15:50:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.6 MiB/s wr, 56 op/s
Feb 02 15:50:26 compute-0 nova_compute[239545]: 2026-02-02 15:50:26.341 239549 DEBUG nova.virt.libvirt.driver [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:50:26 compute-0 nova_compute[239545]: 2026-02-02 15:50:26.341 239549 DEBUG nova.virt.libvirt.driver [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:50:26 compute-0 nova_compute[239545]: 2026-02-02 15:50:26.341 239549 DEBUG nova.virt.libvirt.driver [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:50:26 compute-0 nova_compute[239545]: 2026-02-02 15:50:26.341 239549 DEBUG nova.virt.libvirt.driver [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No VIF found with MAC fa:16:3e:e2:01:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:50:26 compute-0 nova_compute[239545]: 2026-02-02 15:50:26.642 239549 DEBUG oslo_concurrency.lockutils [None req-93562270-9cd1-4ea3-9381-8e75734f0fd8 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.408 239549 DEBUG oslo_concurrency.lockutils [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.409 239549 DEBUG oslo_concurrency.lockutils [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.425 239549 INFO nova.compute.manager [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Detaching volume 596184f8-0722-4a4e-9d05-d2841287fe8f
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.543 239549 INFO nova.virt.block_device [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Attempting to driver detach volume 596184f8-0722-4a4e-9d05-d2841287fe8f from mountpoint /dev/vdb
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.663 239549 DEBUG os_brick.encryptors [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Using volume encryption metadata '{'encryption_key_id': 'fc0081f3-0ac6-491f-a28c-9f3e1968eb55', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-596184f8-0722-4a4e-9d05-d2841287fe8f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '596184f8-0722-4a4e-9d05-d2841287fe8f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2d2eca14-3fbd-4b14-89c7-1222669b1ce0', 'attached_at': '', 'detached_at': '', 'volume_id': '596184f8-0722-4a4e-9d05-d2841287fe8f', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.669 239549 DEBUG nova.virt.libvirt.driver [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Attempting to detach device vdb from instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.670 239549 DEBUG nova.virt.libvirt.guest [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-596184f8-0722-4a4e-9d05-d2841287fe8f">
Feb 02 15:50:27 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   </source>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <serial>596184f8-0722-4a4e-9d05-d2841287fe8f</serial>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:50:27 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="b611ba03-f09e-479c-8a11-a1adbaa6eef6"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:50:27 compute-0 nova_compute[239545]: </disk>
Feb 02 15:50:27 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.676 239549 INFO nova.virt.libvirt.driver [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully detached device vdb from instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 from the persistent domain config.
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.676 239549 DEBUG nova.virt.libvirt.driver [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.677 239549 DEBUG nova.virt.libvirt.guest [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] detach device xml: <disk type="network" device="disk">
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <source protocol="rbd" name="volumes/volume-596184f8-0722-4a4e-9d05-d2841287fe8f">
Feb 02 15:50:27 compute-0 nova_compute[239545]:     <host name="192.168.122.100" port="6789"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   </source>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <target dev="vdb" bus="virtio"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <serial>596184f8-0722-4a4e-9d05-d2841287fe8f</serial>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   <encryption format="luks">
Feb 02 15:50:27 compute-0 nova_compute[239545]:     <secret type="passphrase" uuid="b611ba03-f09e-479c-8a11-a1adbaa6eef6"/>
Feb 02 15:50:27 compute-0 nova_compute[239545]:   </encryption>
Feb 02 15:50:27 compute-0 nova_compute[239545]: </disk>
Feb 02 15:50:27 compute-0 nova_compute[239545]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 02 15:50:27 compute-0 ceph-mon[75334]: pgmap v1786: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.6 MiB/s wr, 56 op/s
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.777 239549 DEBUG nova.virt.libvirt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Received event <DeviceRemovedEvent: 1770047427.7770739, 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.778 239549 DEBUG nova.virt.libvirt.driver [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.780 239549 INFO nova.virt.libvirt.driver [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully detached device vdb from instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 from the live domain config.
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.963 239549 DEBUG nova.objects.instance [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'flavor' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:50:27 compute-0 nova_compute[239545]: 2026-02-02 15:50:27.996 239549 DEBUG oslo_concurrency.lockutils [None req-d97e1392-4466-4db0-9af2-0ad27348affb 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 13 KiB/s wr, 3 op/s
Feb 02 15:50:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:50:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569443808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:50:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569443808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/569443808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/569443808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.891 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.891 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.891 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.892 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.892 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.893 239549 INFO nova.compute.manager [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Terminating instance
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.894 239549 DEBUG nova.compute.manager [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:50:28 compute-0 kernel: tapbb84195f-05 (unregistering): left promiscuous mode
Feb 02 15:50:28 compute-0 NetworkManager[49171]: <info>  [1770047428.9535] device (tapbb84195f-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.965 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:28 compute-0 ovn_controller[144995]: 2026-02-02T15:50:28Z|00266|binding|INFO|Releasing lport bb84195f-05e7-45f3-871b-6bd27abe7803 from this chassis (sb_readonly=0)
Feb 02 15:50:28 compute-0 ovn_controller[144995]: 2026-02-02T15:50:28Z|00267|binding|INFO|Setting lport bb84195f-05e7-45f3-871b-6bd27abe7803 down in Southbound
Feb 02 15:50:28 compute-0 ovn_controller[144995]: 2026-02-02T15:50:28Z|00268|binding|INFO|Removing iface tapbb84195f-05 ovn-installed in OVS
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.968 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:28 compute-0 nova_compute[239545]: 2026-02-02 15:50:28.972 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:28.974 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:01:ba 10.100.0.9'], port_security=['fa:16:3e:e2:01:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2d2eca14-3fbd-4b14-89c7-1222669b1ce0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f1fa883-939f-4034-a7da-27482c2d1bd4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=bb84195f-05e7-45f3-871b-6bd27abe7803) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:50:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:28.976 154982 INFO neutron.agent.ovn.metadata.agent [-] Port bb84195f-05e7-45f3-871b-6bd27abe7803 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c unbound from our chassis
Feb 02 15:50:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:28.977 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 571a8d26-1b08-4233-a158-71a28cbbf88c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:50:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:28.978 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c2fd537f-9797-4617-b116-57edbac3bce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:28 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:28.978 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace which is not needed anymore
Feb 02 15:50:29 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb 02 15:50:29 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 15.066s CPU time.
Feb 02 15:50:29 compute-0 systemd-machined[207609]: Machine qemu-27-instance-0000001b terminated.
Feb 02 15:50:29 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [NOTICE]   (272799) : haproxy version is 2.8.14-c23fe91
Feb 02 15:50:29 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [NOTICE]   (272799) : path to executable is /usr/sbin/haproxy
Feb 02 15:50:29 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [WARNING]  (272799) : Exiting Master process...
Feb 02 15:50:29 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [ALERT]    (272799) : Current worker (272803) exited with code 143 (Terminated)
Feb 02 15:50:29 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[272784]: [WARNING]  (272799) : All workers exited. Exiting... (0)
Feb 02 15:50:29 compute-0 systemd[1]: libpod-42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b.scope: Deactivated successfully.
Feb 02 15:50:29 compute-0 podman[273424]: 2026-02-02 15:50:29.087578573 +0000 UTC m=+0.039964065 container died 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.095 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.109 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b-userdata-shm.mount: Deactivated successfully.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.116 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e5ec2c1b519b649aa64e1adcbfe207c4cc73169cda48e5563dbac3415bb3d4-merged.mount: Deactivated successfully.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.121 239549 INFO nova.virt.libvirt.driver [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Instance destroyed successfully.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.121 239549 DEBUG nova.objects.instance [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'resources' on Instance uuid 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.138 239549 DEBUG nova.virt.libvirt.vif [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:49:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1819877545',display_name='tempest-TestEncryptedCinderVolumes-server-1819877545',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1819877545',id=27,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDaOlyxmAyuy76oWtAo15Hl9/4Q0MejG5dv89gID1Tzgvaqktd9FkvkddJltkvqWs4rflVs9BoIu+pRNya2tmUA0lRv7tq9xh7HtYLzpAbqRmAQM05vadKi0bX2BV/0beQ==',key_name='tempest-keypair-199719867',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:50:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-fezb31rz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:50:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=2d2eca14-3fbd-4b14-89c7-1222669b1ce0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.139 239549 DEBUG nova.network.os_vif_util [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "bb84195f-05e7-45f3-871b-6bd27abe7803", "address": "fa:16:3e:e2:01:ba", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb84195f-05", "ovs_interfaceid": "bb84195f-05e7-45f3-871b-6bd27abe7803", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:50:29 compute-0 podman[273424]: 2026-02-02 15:50:29.139614622 +0000 UTC m=+0.092000124 container cleanup 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.139 239549 DEBUG nova.network.os_vif_util [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.140 239549 DEBUG os_vif [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.141 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.141 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb84195f-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.143 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 systemd[1]: libpod-conmon-42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b.scope: Deactivated successfully.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.145 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.147 239549 INFO os_vif [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:01:ba,bridge_name='br-int',has_traffic_filtering=True,id=bb84195f-05e7-45f3-871b-6bd27abe7803,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb84195f-05')
Feb 02 15:50:29 compute-0 podman[273463]: 2026-02-02 15:50:29.213109001 +0000 UTC m=+0.057736733 container remove 42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.217 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7247be8-8a95-463c-b762-13ddafd201ce]: (4, ('Mon Feb  2 03:50:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b)\n42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b\nMon Feb  2 03:50:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b)\n42097d03f345bfc5722f9ecad1384d57d0e75cecd6a0c6b2a8a070214ccb1c1b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.219 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[984975b7-d300-46ed-ba21-ca6beb1bf6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.221 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.223 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 kernel: tap571a8d26-10: left promiscuous mode
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.229 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.230 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.233 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc57f47-9e2b-4341-b41d-dc61fb392677]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.257 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d63f36-93ca-4ac0-be0f-3a465424387b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.259 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[77ce84a6-76f8-43b3-ad29-fd1bd2030a05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.273 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[266f471e-25e5-43e1-962d-d520321d4396]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488576, 'reachable_time': 16432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273493, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.277 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:50:29 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:29.277 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[b5052881-de59-4df7-a1ca-afe6361342bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:50:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d571a8d26\x2d1b08\x2d4233\x2da158\x2d71a28cbbf88c.mount: Deactivated successfully.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.543 239549 DEBUG nova.compute.manager [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-unplugged-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.544 239549 DEBUG oslo_concurrency.lockutils [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.544 239549 DEBUG oslo_concurrency.lockutils [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.544 239549 DEBUG oslo_concurrency.lockutils [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.544 239549 DEBUG nova.compute.manager [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] No waiting events found dispatching network-vif-unplugged-bb84195f-05e7-45f3-871b-6bd27abe7803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.545 239549 DEBUG nova.compute.manager [req-58657922-c285-4f18-8702-daa8c572334e req-0582a9c7-16ac-4457-b822-f5fd056bebcf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-unplugged-bb84195f-05e7-45f3-871b-6bd27abe7803 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.545 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.552 239549 INFO nova.virt.libvirt.driver [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Deleting instance files /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0_del
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.552 239549 INFO nova.virt.libvirt.driver [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Deletion of /var/lib/nova/instances/2d2eca14-3fbd-4b14-89c7-1222669b1ce0_del complete
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.640 239549 INFO nova.compute.manager [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.641 239549 DEBUG oslo.service.loopingcall [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.641 239549 DEBUG nova.compute.manager [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:50:29 compute-0 nova_compute[239545]: 2026-02-02 15:50:29.641 239549 DEBUG nova.network.neutron [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:50:29 compute-0 ceph-mon[75334]: pgmap v1787: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 13 KiB/s wr, 3 op/s
Feb 02 15:50:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 13 KiB/s wr, 6 op/s
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.677 239549 DEBUG nova.compute.manager [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.677 239549 DEBUG oslo_concurrency.lockutils [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.677 239549 DEBUG oslo_concurrency.lockutils [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.678 239549 DEBUG oslo_concurrency.lockutils [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.678 239549 DEBUG nova.compute.manager [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] No waiting events found dispatching network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.678 239549 WARNING nova.compute.manager [req-d48b01af-de21-4dc1-b485-b8f4be1eac14 req-0cf69e33-5780-45c9-b59e-e8a2ecc0e936 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received unexpected event network-vif-plugged-bb84195f-05e7-45f3-871b-6bd27abe7803 for instance with vm_state active and task_state deleting.
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.715 239549 DEBUG nova.network.neutron [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.734 239549 INFO nova.compute.manager [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Took 2.09 seconds to deallocate network for instance.
Feb 02 15:50:31 compute-0 ceph-mon[75334]: pgmap v1788: 305 pgs: 305 active+clean; 249 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 13 KiB/s wr, 6 op/s
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.812 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.812 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.831 239549 DEBUG nova.compute.manager [req-7443f282-6812-4203-84a6-0b8070dbb07a req-2c7fcc76-69dc-4570-9445-db7584b9476a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Received event network-vif-deleted-bb84195f-05e7-45f3-871b-6bd27abe7803 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:50:31 compute-0 nova_compute[239545]: 2026-02-02 15:50:31.873 239549 DEBUG oslo_concurrency.processutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:50:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 192 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 25 op/s
Feb 02 15:50:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:50:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070227419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.422 239549 INFO nova.compute.manager [None req-3f8c3ab5-f83b-47b7-8488-70b836bee041 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Get console output
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.432 239549 INFO oslo.privsep.daemon [None req-3f8c3ab5-f83b-47b7-8488-70b836bee041 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpfm0hx76t/privsep.sock']
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.452 239549 DEBUG oslo_concurrency.processutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.459 239549 DEBUG nova.compute.provider_tree [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.499 239549 DEBUG nova.scheduler.client.report [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.534 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.583 239549 INFO nova.scheduler.client.report [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Deleted allocations for instance 2d2eca14-3fbd-4b14-89c7-1222669b1ce0
Feb 02 15:50:32 compute-0 nova_compute[239545]: 2026-02-02 15:50:32.662 239549 DEBUG oslo_concurrency.lockutils [None req-96213446-eb8d-4e6b-9c57-2140260f9d06 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "2d2eca14-3fbd-4b14-89c7-1222669b1ce0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:32 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2070227419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:50:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.495 239549 INFO oslo.privsep.daemon [None req-3f8c3ab5-f83b-47b7-8488-70b836bee041 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Spawned new privsep daemon via rootwrap
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.387 273521 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.390 273521 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.392 273521 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.392 273521 INFO oslo.privsep.daemon [-] privsep daemon running as pid 273521
Feb 02 15:50:33 compute-0 nova_compute[239545]: 2026-02-02 15:50:33.611 273521 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 02 15:50:33 compute-0 ceph-mon[75334]: pgmap v1789: 305 pgs: 305 active+clean; 192 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 25 op/s
Feb 02 15:50:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 170 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 33 op/s
Feb 02 15:50:34 compute-0 nova_compute[239545]: 2026-02-02 15:50:34.144 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:34 compute-0 nova_compute[239545]: 2026-02-02 15:50:34.542 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Feb 02 15:50:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Feb 02 15:50:35 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Feb 02 15:50:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:50:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3927220449' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:50:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3927220449' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:36 compute-0 ceph-mon[75334]: pgmap v1790: 305 pgs: 305 active+clean; 170 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 33 op/s
Feb 02 15:50:36 compute-0 ceph-mon[75334]: osdmap e485: 3 total, 3 up, 3 in
Feb 02 15:50:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3927220449' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3927220449' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.4 KiB/s wr, 59 op/s
Feb 02 15:50:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Feb 02 15:50:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Feb 02 15:50:37 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Feb 02 15:50:37 compute-0 ceph-mon[75334]: pgmap v1792: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.4 KiB/s wr, 59 op/s
Feb 02 15:50:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:50:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774791496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:50:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774791496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.5 KiB/s wr, 71 op/s
Feb 02 15:50:38 compute-0 ceph-mon[75334]: osdmap e486: 3 total, 3 up, 3 in
Feb 02 15:50:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/774791496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:50:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/774791496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:50:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.259285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438259319, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 920, "num_deletes": 258, "total_data_size": 1210418, "memory_usage": 1234256, "flush_reason": "Manual Compaction"}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438266443, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1198592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35143, "largest_seqno": 36062, "table_properties": {"data_size": 1194001, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10122, "raw_average_key_size": 19, "raw_value_size": 1184674, "raw_average_value_size": 2260, "num_data_blocks": 97, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047363, "oldest_key_time": 1770047363, "file_creation_time": 1770047438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7205 microseconds, and 2844 cpu microseconds.
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.266491) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1198592 bytes OK
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.266510) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.268945) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.268956) EVENT_LOG_v1 {"time_micros": 1770047438268953, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.268973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1205916, prev total WAL file size 1205916, number of live WAL files 2.
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.269337) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1170KB)], [71(9917KB)]
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438269366, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11353677, "oldest_snapshot_seqno": -1}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6779 keys, 11192203 bytes, temperature: kUnknown
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438320576, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11192203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11140774, "index_size": 33397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 170914, "raw_average_key_size": 25, "raw_value_size": 11012977, "raw_average_value_size": 1624, "num_data_blocks": 1336, "num_entries": 6779, "num_filter_entries": 6779, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.320814) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11192203 bytes
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.323244) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.4 rd, 218.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(18.8) write-amplify(9.3) OK, records in: 7311, records dropped: 532 output_compression: NoCompression
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.323261) EVENT_LOG_v1 {"time_micros": 1770047438323252, "job": 40, "event": "compaction_finished", "compaction_time_micros": 51274, "compaction_time_cpu_micros": 17957, "output_level": 6, "num_output_files": 1, "total_output_size": 11192203, "num_input_records": 7311, "num_output_records": 6779, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438323494, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047438324308, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.269266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.324361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.324365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.324367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.324368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:38 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:50:38.324370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:50:39 compute-0 nova_compute[239545]: 2026-02-02 15:50:39.146 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Feb 02 15:50:39 compute-0 ceph-mon[75334]: pgmap v1794: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.5 KiB/s wr, 71 op/s
Feb 02 15:50:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Feb 02 15:50:39 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Feb 02 15:50:39 compute-0 nova_compute[239545]: 2026-02-02 15:50:39.544 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.7 KiB/s wr, 59 op/s
Feb 02 15:50:40 compute-0 ceph-mon[75334]: osdmap e487: 3 total, 3 up, 3 in
Feb 02 15:50:41 compute-0 ceph-mon[75334]: pgmap v1796: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.7 KiB/s wr, 59 op/s
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 3.9 KiB/s wr, 104 op/s
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:50:42
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups']
Feb 02 15:50:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:50:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Feb 02 15:50:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Feb 02 15:50:43 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Feb 02 15:50:43 compute-0 ceph-mon[75334]: pgmap v1797: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 3.9 KiB/s wr, 104 op/s
Feb 02 15:50:43 compute-0 ceph-mon[75334]: osdmap e488: 3 total, 3 up, 3 in
Feb 02 15:50:44 compute-0 nova_compute[239545]: 2026-02-02 15:50:44.119 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047429.1168838, 2d2eca14-3fbd-4b14-89c7-1222669b1ce0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:50:44 compute-0 nova_compute[239545]: 2026-02-02 15:50:44.120 239549 INFO nova.compute.manager [-] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] VM Stopped (Lifecycle Event)
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 KiB/s wr, 73 op/s
Feb 02 15:50:44 compute-0 nova_compute[239545]: 2026-02-02 15:50:44.140 239549 DEBUG nova.compute.manager [None req-1422312f-e4a5-4d5e-b07c-12389fd589ac - - - - - -] [instance: 2d2eca14-3fbd-4b14-89c7-1222669b1ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:50:44 compute-0 nova_compute[239545]: 2026-02-02 15:50:44.183 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:44 compute-0 nova_compute[239545]: 2026-02-02 15:50:44.545 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:50:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:50:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:50:45 compute-0 ceph-mon[75334]: pgmap v1799: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 KiB/s wr, 73 op/s
Feb 02 15:50:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.2 KiB/s wr, 63 op/s
Feb 02 15:50:47 compute-0 ceph-mon[75334]: pgmap v1800: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.2 KiB/s wr, 63 op/s
Feb 02 15:50:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.0 KiB/s wr, 43 op/s
Feb 02 15:50:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Feb 02 15:50:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Feb 02 15:50:48 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Feb 02 15:50:49 compute-0 nova_compute[239545]: 2026-02-02 15:50:49.186 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:49 compute-0 nova_compute[239545]: 2026-02-02 15:50:49.546 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:49 compute-0 ceph-mon[75334]: pgmap v1801: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.0 KiB/s wr, 43 op/s
Feb 02 15:50:49 compute-0 ceph-mon[75334]: osdmap e489: 3 total, 3 up, 3 in
Feb 02 15:50:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:50:51 compute-0 ceph-mon[75334]: pgmap v1803: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail
Feb 02 15:50:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 1 op/s
Feb 02 15:50:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:53 compute-0 ceph-mon[75334]: pgmap v1804: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 1 op/s
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:50:54 compute-0 nova_compute[239545]: 2026-02-02 15:50:54.189 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:54 compute-0 nova_compute[239545]: 2026-02-02 15:50:54.548 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007631947629858664 of space, bias 1.0, pg target 0.2289584288957599 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000386112841838209 of space, bias 1.0, pg target 0.1158338525514627 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.876890716681903e-06 of space, bias 1.0, pg target 0.0011630672150045708 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677628867582875 of space, bias 1.0, pg target 0.20032886602748626 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.425403724884532e-06 of space, bias 4.0, pg target 0.0017104844698614386 quantized to 16 (current 16)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:50:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:50:55 compute-0 podman[273524]: 2026-02-02 15:50:55.334870444 +0000 UTC m=+0.067360215 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:50:55 compute-0 podman[273525]: 2026-02-02 15:50:55.35143169 +0000 UTC m=+0.079223013 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 02 15:50:55 compute-0 ceph-mon[75334]: pgmap v1805: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:50:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:50:57 compute-0 ceph-mon[75334]: pgmap v1806: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:50:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:50:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:50:59 compute-0 nova_compute[239545]: 2026-02-02 15:50:59.192 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:59.261 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:50:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:59.262 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:50:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:50:59.262 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:50:59 compute-0 nova_compute[239545]: 2026-02-02 15:50:59.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:50:59 compute-0 ceph-mon[75334]: pgmap v1807: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Feb 02 15:51:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 89 B/s wr, 7 op/s
Feb 02 15:51:01 compute-0 ovn_controller[144995]: 2026-02-02T15:51:01Z|00269|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 02 15:51:01 compute-0 ceph-mon[75334]: pgmap v1808: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 89 B/s wr, 7 op/s
Feb 02 15:51:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Feb 02 15:51:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:03 compute-0 ceph-mon[75334]: pgmap v1809: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Feb 02 15:51:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Feb 02 15:51:04 compute-0 nova_compute[239545]: 2026-02-02 15:51:04.194 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:04 compute-0 sudo[273568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:51:04 compute-0 sudo[273568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:04 compute-0 sudo[273568]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:04 compute-0 sudo[273593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 15:51:04 compute-0 sudo[273593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:04 compute-0 nova_compute[239545]: 2026-02-02 15:51:04.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:04 compute-0 podman[273660]: 2026-02-02 15:51:04.876612066 +0000 UTC m=+0.048942592 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:51:04 compute-0 podman[273660]: 2026-02-02 15:51:04.960178288 +0000 UTC m=+0.132508814 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:51:05 compute-0 sudo[273593]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:51:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:05 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:51:05 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:05 compute-0 sudo[273847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:51:05 compute-0 sudo[273847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:05 compute-0 sudo[273847]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:05 compute-0 sudo[273872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:51:05 compute-0 sudo[273872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:05 compute-0 ceph-mon[75334]: pgmap v1810: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Feb 02 15:51:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:05 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:06 compute-0 sudo[273872]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:51:06 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:51:06 compute-0 sudo[273929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:51:06 compute-0 sudo[273929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:06 compute-0 sudo[273929]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:06 compute-0 sudo[273954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:51:06 compute-0 sudo[273954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.544247028 +0000 UTC m=+0.032090458 container create 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:51:06 compute-0 systemd[1]: Started libpod-conmon-23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342.scope.
Feb 02 15:51:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.61869439 +0000 UTC m=+0.106537850 container init 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.62781855 +0000 UTC m=+0.115662020 container start 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.530522253 +0000 UTC m=+0.018365703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.63219425 +0000 UTC m=+0.120037710 container attach 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:51:06 compute-0 cranky_hopper[274005]: 167 167
Feb 02 15:51:06 compute-0 systemd[1]: libpod-23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342.scope: Deactivated successfully.
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.633996975 +0000 UTC m=+0.121840405 container died 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c9c98e358e67ba9f12a08a6129a36c6228856224dcb5e4b3f21ada9e86c5233-merged.mount: Deactivated successfully.
Feb 02 15:51:06 compute-0 podman[273990]: 2026-02-02 15:51:06.671432447 +0000 UTC m=+0.159275877 container remove 23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hopper, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:51:06 compute-0 systemd[1]: libpod-conmon-23cc314239927d655b831ff46c05f8ce14473b4c5039632dbdaa1fa99ea2b342.scope: Deactivated successfully.
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:51:06 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:51:06 compute-0 podman[274029]: 2026-02-02 15:51:06.814573537 +0000 UTC m=+0.046883040 container create 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:51:06 compute-0 systemd[1]: Started libpod-conmon-1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7.scope.
Feb 02 15:51:06 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:06 compute-0 podman[274029]: 2026-02-02 15:51:06.791859926 +0000 UTC m=+0.024169489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:06 compute-0 podman[274029]: 2026-02-02 15:51:06.896318754 +0000 UTC m=+0.128628307 container init 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:51:06 compute-0 podman[274029]: 2026-02-02 15:51:06.907191256 +0000 UTC m=+0.139500759 container start 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:51:06 compute-0 podman[274029]: 2026-02-02 15:51:06.911014353 +0000 UTC m=+0.143323876 container attach 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:51:07 compute-0 crazy_spence[274046]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:51:07 compute-0 crazy_spence[274046]: --> All data devices are unavailable
Feb 02 15:51:07 compute-0 systemd[1]: libpod-1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7.scope: Deactivated successfully.
Feb 02 15:51:07 compute-0 podman[274029]: 2026-02-02 15:51:07.357295637 +0000 UTC m=+0.589605160 container died 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2c4067d6b0c573b3bf9469f7396e81940f7208bf045f113f04683aea68e4963-merged.mount: Deactivated successfully.
Feb 02 15:51:07 compute-0 podman[274029]: 2026-02-02 15:51:07.402180326 +0000 UTC m=+0.634489829 container remove 1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:51:07 compute-0 systemd[1]: libpod-conmon-1ee21572f8054d598f43c206625876879792817af57589f870cd353c305301b7.scope: Deactivated successfully.
Feb 02 15:51:07 compute-0 sudo[273954]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:07 compute-0 sudo[274077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:51:07 compute-0 sudo[274077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:07 compute-0 sudo[274077]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:07 compute-0 sudo[274102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:51:07 compute-0 sudo[274102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:07 compute-0 ceph-mon[75334]: pgmap v1811: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.792074582 +0000 UTC m=+0.038549570 container create 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:51:07 compute-0 systemd[1]: Started libpod-conmon-0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58.scope.
Feb 02 15:51:07 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.85557047 +0000 UTC m=+0.102045458 container init 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.864957485 +0000 UTC m=+0.111432473 container start 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:51:07 compute-0 tender_sanderson[274153]: 167 167
Feb 02 15:51:07 compute-0 systemd[1]: libpod-0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58.scope: Deactivated successfully.
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.870792753 +0000 UTC m=+0.117267841 container attach 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.871447989 +0000 UTC m=+0.117922987 container died 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.775934436 +0000 UTC m=+0.022409444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aa8927394bfd80622730548f8bb509e80610a41683e49e53a40b7185bd19539-merged.mount: Deactivated successfully.
Feb 02 15:51:07 compute-0 podman[274137]: 2026-02-02 15:51:07.904973822 +0000 UTC m=+0.151448810 container remove 0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_sanderson, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:51:07 compute-0 systemd[1]: libpod-conmon-0510c77be680f07ed61795ca2f41167fea24699e4d6fefbf3ed5887ce9c2ca58.scope: Deactivated successfully.
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.034463398 +0000 UTC m=+0.034283973 container create 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:51:08 compute-0 systemd[1]: Started libpod-conmon-3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339.scope.
Feb 02 15:51:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8a91b06d7c152cc828a43fb8b9c84cc5c92c2d916f0736ed29c3d067068554/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8a91b06d7c152cc828a43fb8b9c84cc5c92c2d916f0736ed29c3d067068554/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8a91b06d7c152cc828a43fb8b9c84cc5c92c2d916f0736ed29c3d067068554/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f8a91b06d7c152cc828a43fb8b9c84cc5c92c2d916f0736ed29c3d067068554/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.017525562 +0000 UTC m=+0.017346187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.137884679 +0000 UTC m=+0.137705284 container init 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.144359422 +0000 UTC m=+0.144180007 container start 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.160757807 +0000 UTC m=+0.160578402 container attach 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]: {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     "0": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "devices": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "/dev/loop3"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             ],
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_name": "ceph_lv0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_size": "21470642176",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "name": "ceph_lv0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "tags": {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_name": "ceph",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.crush_device_class": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.encrypted": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.objectstore": "bluestore",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_id": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.vdo": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.with_tpm": "0"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             },
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "vg_name": "ceph_vg0"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         }
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     ],
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     "1": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "devices": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "/dev/loop4"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             ],
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_name": "ceph_lv1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_size": "21470642176",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "name": "ceph_lv1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "tags": {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_name": "ceph",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.crush_device_class": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.encrypted": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.objectstore": "bluestore",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_id": "1",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.vdo": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.with_tpm": "0"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             },
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "vg_name": "ceph_vg1"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         }
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     ],
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     "2": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "devices": [
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "/dev/loop5"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             ],
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_name": "ceph_lv2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_size": "21470642176",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "name": "ceph_lv2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "tags": {
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.cluster_name": "ceph",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.crush_device_class": "",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.encrypted": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.objectstore": "bluestore",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osd_id": "2",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.vdo": "0",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:                 "ceph.with_tpm": "0"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             },
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "type": "block",
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:             "vg_name": "ceph_vg2"
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:         }
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]:     ]
Feb 02 15:51:08 compute-0 relaxed_sutherland[274192]: }
Feb 02 15:51:08 compute-0 systemd[1]: libpod-3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339.scope: Deactivated successfully.
Feb 02 15:51:08 compute-0 conmon[274192]: conmon 3171946ce6b6a52f2f1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339.scope/container/memory.events
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.396470431 +0000 UTC m=+0.396291016 container died 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f8a91b06d7c152cc828a43fb8b9c84cc5c92c2d916f0736ed29c3d067068554-merged.mount: Deactivated successfully.
Feb 02 15:51:08 compute-0 podman[274176]: 2026-02-02 15:51:08.433562026 +0000 UTC m=+0.433382631 container remove 3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:51:08 compute-0 systemd[1]: libpod-conmon-3171946ce6b6a52f2f1fa7dcda2eb2efa381e1e334ac7f0ccc29ba75e494b339.scope: Deactivated successfully.
Feb 02 15:51:08 compute-0 sudo[274102]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:08 compute-0 sudo[274213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:51:08 compute-0 sudo[274213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:08 compute-0 sudo[274213]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:08 compute-0 sudo[274238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:51:08 compute-0 sudo[274238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.811360084 +0000 UTC m=+0.035602329 container create 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:51:08 compute-0 systemd[1]: Started libpod-conmon-230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f.scope.
Feb 02 15:51:08 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.883676438 +0000 UTC m=+0.107918683 container init 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.794187501 +0000 UTC m=+0.018429806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.890394964 +0000 UTC m=+0.114637219 container start 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:51:08 compute-0 priceless_goldstine[274292]: 167 167
Feb 02 15:51:08 compute-0 systemd[1]: libpod-230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f.scope: Deactivated successfully.
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.89468424 +0000 UTC m=+0.118926485 container attach 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.894928026 +0000 UTC m=+0.119170271 container died 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 02 15:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcb8577e4d7d270d54fff5e8b404fa451493fe6e158ef040be1b349a82db605c-merged.mount: Deactivated successfully.
Feb 02 15:51:08 compute-0 podman[274276]: 2026-02-02 15:51:08.926066204 +0000 UTC m=+0.150308449 container remove 230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 02 15:51:08 compute-0 systemd[1]: libpod-conmon-230a61600d085cd5b5a777a957df58da94e7282a1d65e767fb6070661cf43f8f.scope: Deactivated successfully.
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.066939819 +0000 UTC m=+0.041948186 container create 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:51:09 compute-0 systemd[1]: Started libpod-conmon-270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c.scope.
Feb 02 15:51:09 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32766b1d4f42698e603e803c3cd9db90218d4ff1ab1e2a7e355a4c66311aa513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32766b1d4f42698e603e803c3cd9db90218d4ff1ab1e2a7e355a4c66311aa513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32766b1d4f42698e603e803c3cd9db90218d4ff1ab1e2a7e355a4c66311aa513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32766b1d4f42698e603e803c3cd9db90218d4ff1ab1e2a7e355a4c66311aa513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.132415604 +0000 UTC m=+0.107423981 container init 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.136753821 +0000 UTC m=+0.111762188 container start 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.140959615 +0000 UTC m=+0.115968002 container attach 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.052295467 +0000 UTC m=+0.027303854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:51:09 compute-0 nova_compute[239545]: 2026-02-02 15:51:09.196 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:09 compute-0 nova_compute[239545]: 2026-02-02 15:51:09.552 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:09 compute-0 lvm[274413]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:51:09 compute-0 lvm[274413]: VG ceph_vg1 finished
Feb 02 15:51:09 compute-0 lvm[274412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:51:09 compute-0 lvm[274412]: VG ceph_vg0 finished
Feb 02 15:51:09 compute-0 lvm[274415]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:51:09 compute-0 lvm[274415]: VG ceph_vg2 finished
Feb 02 15:51:09 compute-0 ceph-mon[75334]: pgmap v1812: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:09 compute-0 busy_franklin[274333]: {}
Feb 02 15:51:09 compute-0 systemd[1]: libpod-270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c.scope: Deactivated successfully.
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.870699655 +0000 UTC m=+0.845708022 container died 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb 02 15:51:09 compute-0 systemd[1]: libpod-270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c.scope: Consumed 1.019s CPU time.
Feb 02 15:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-32766b1d4f42698e603e803c3cd9db90218d4ff1ab1e2a7e355a4c66311aa513-merged.mount: Deactivated successfully.
Feb 02 15:51:09 compute-0 podman[274316]: 2026-02-02 15:51:09.997692998 +0000 UTC m=+0.972701375 container remove 270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:51:10 compute-0 systemd[1]: libpod-conmon-270859e4b5aaa4f36d9de07e210c64b7daafc09dae5289ca47ca2a1a54eae28c.scope: Deactivated successfully.
Feb 02 15:51:10 compute-0 sudo[274238]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:51:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:51:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:10 compute-0 sudo[274431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:51:10 compute-0 sudo[274431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:51:10 compute-0 sudo[274431]: pam_unix(sudo:session): session closed for user root
Feb 02 15:51:10 compute-0 nova_compute[239545]: 2026-02-02 15:51:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:10 compute-0 nova_compute[239545]: 2026-02-02 15:51:10.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:51:10 compute-0 nova_compute[239545]: 2026-02-02 15:51:10.577 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:51:10 compute-0 nova_compute[239545]: 2026-02-02 15:51:10.578 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:11 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:51:11 compute-0 ceph-mon[75334]: pgmap v1813: 305 pgs: 305 active+clean; 170 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Feb 02 15:51:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 198 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 MiB/s wr, 23 op/s
Feb 02 15:51:13 compute-0 ceph-mon[75334]: pgmap v1814: 305 pgs: 305 active+clean; 198 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 MiB/s wr, 23 op/s
Feb 02 15:51:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 218 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 MiB/s wr, 18 op/s
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.235 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.555 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.574 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.913 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.914 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:14 compute-0 nova_compute[239545]: 2026-02-02 15:51:14.939 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.011 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.011 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.019 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.020 239549 INFO nova.compute.claims [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.136 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:15 compute-0 ceph-mon[75334]: pgmap v1815: 305 pgs: 305 active+clean; 218 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 3.9 MiB/s wr, 18 op/s
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:51:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1413737890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.729 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.735 239549 DEBUG nova.compute.provider_tree [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.761 239549 DEBUG nova.scheduler.client.report [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.787 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.788 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.844 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.844 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.867 239549 INFO nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.890 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:51:15 compute-0 nova_compute[239545]: 2026-02-02 15:51:15.930 239549 INFO nova.virt.block_device [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Booting with volume 7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6 at /dev/vda
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.065 239549 DEBUG os_brick.utils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.067 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.077 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.078 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[32f87d35-7499-4484-bb61-b325a49a563b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.079 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.087 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.088 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[daacde4f-3d5e-47a1-b087-18dba00a3c61]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.090 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.100 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.101 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[8bffaa72-069a-4005-9029-898e157971a6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.102 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c2dbdc-3f61-4678-a6ef-45b42a9b85a5]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.103 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.125 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.127 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.127 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.127 239549 DEBUG os_brick.initiator.connectors.lightos [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.128 239549 DEBUG os_brick.utils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.128 239549 DEBUG nova.virt.block_device [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updating existing volume attachment record: 6c714970-2a7a-40ec-8cac-2eda7b281e44 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:51:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:16 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1413737890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:16 compute-0 nova_compute[239545]: 2026-02-02 15:51:16.641 239549 DEBUG nova.policy [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16b55bfc98574e0096db4f19bcdcbb2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:51:16 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:51:16 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2240883430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.156 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.158 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.158 239549 INFO nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Creating image(s)
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.158 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.159 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Ensure instance console log exists: /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.159 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.159 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.160 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:17 compute-0 ceph-mon[75334]: pgmap v1816: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2240883430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.293 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:17 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:17.294 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:51:17 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:17.296 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.411 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Successfully created port: 8bdda305-0b99-405f-a9b5-e64e9600d192 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.575 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.576 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:51:17 compute-0 nova_compute[239545]: 2026-02-02 15:51:17.576 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:51:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106024128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.077 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.158 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.159 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.159 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:51:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3106024128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.279 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.280 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3965MB free_disk=59.94249867834151GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.280 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.280 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.302 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Successfully updated port: 8bdda305-0b99-405f-a9b5-e64e9600d192 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.328 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.328 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquired lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.328 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.357 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.358 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance bb8fe37f-cd7c-43d8-9900-7d5ff683444d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.358 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.358 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.400 239549 DEBUG nova.compute.manager [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-changed-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.401 239549 DEBUG nova.compute.manager [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Refreshing instance network info cache due to event network-changed-8bdda305-0b99-405f-a9b5-e64e9600d192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.401 239549 DEBUG oslo_concurrency.lockutils [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.422 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.468 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:51:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:51:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523984547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.942 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.947 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:51:18 compute-0 nova_compute[239545]: 2026-02-02 15:51:18.984 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.010 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.010 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.236 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:19 compute-0 ceph-mon[75334]: pgmap v1817: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1523984547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.525 239549 DEBUG nova.network.neutron [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updating instance_info_cache with network_info: [{"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.557 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.583 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Releasing lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.583 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Instance network_info: |[{"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.584 239549 DEBUG oslo_concurrency.lockutils [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.584 239549 DEBUG nova.network.neutron [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Refreshing network info cache for port 8bdda305-0b99-405f-a9b5-e64e9600d192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.589 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Start _get_guest_xml network_info=[{"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '6c714970-2a7a-40ec-8cac-2eda7b281e44', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb8fe37f-cd7c-43d8-9900-7d5ff683444d', 'attached_at': '', 'detached_at': '', 'volume_id': '7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'serial': '7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.594 239549 WARNING nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.599 239549 DEBUG nova.virt.libvirt.host [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.600 239549 DEBUG nova.virt.libvirt.host [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.603 239549 DEBUG nova.virt.libvirt.host [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.603 239549 DEBUG nova.virt.libvirt.host [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.604 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.604 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.605 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.605 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.605 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.606 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.606 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.606 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.607 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.607 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.607 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.607 239549 DEBUG nova.virt.hardware [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.634 239549 DEBUG nova.storage.rbd_utils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:51:19 compute-0 nova_compute[239545]: 2026-02-02 15:51:19.637 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.010 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:51:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121276630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.165 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1121276630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.298 239549 DEBUG os_brick.encryptors [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Using volume encryption metadata '{'encryption_key_id': '2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb8fe37f-cd7c-43d8-9900-7d5ff683444d', 'attached_at': '', 'detached_at': '', 'volume_id': '7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.301 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.318 239549 DEBUG barbicanclient.v1.secrets [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.318 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.345 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.345 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.374 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.374 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.404 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.404 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.437 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.437 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.539 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.540 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.565 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.565 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.586 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.587 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.623 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.624 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.646 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.647 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.667 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.667 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.690 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.690 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.713 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.714 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.733 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.734 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.757 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.758 239549 INFO barbicanclient.base [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/2d82e80f-c4f6-40f4-b2d3-6ec3ffe1df6a
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.777 239549 DEBUG barbicanclient.client [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.778 239549 DEBUG nova.virt.libvirt.host [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <volume>7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6</volume>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:51:20 compute-0 nova_compute[239545]: </secret>
Feb 02 15:51:20 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.819 239549 DEBUG nova.virt.libvirt.vif [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1660316277',display_name='tempest-TestEncryptedCinderVolumes-server-1660316277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1660316277',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-usxwqhrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:51:15Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=bb8fe37f-cd7c-43d8-9900-7d5ff683444d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.820 239549 DEBUG nova.network.os_vif_util [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.821 239549 DEBUG nova.network.os_vif_util [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.822 239549 DEBUG nova.objects.instance [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'pci_devices' on Instance uuid bb8fe37f-cd7c-43d8-9900-7d5ff683444d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.838 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <uuid>bb8fe37f-cd7c-43d8-9900-7d5ff683444d</uuid>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <name>instance-0000001c</name>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1660316277</nova:name>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:51:19</nova:creationTime>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:user uuid="16b55bfc98574e0096db4f19bcdcbb2e">tempest-TestEncryptedCinderVolumes-987785960-project-member</nova:user>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:project uuid="6e1abae6c1404ce2b24265e7136ffe6a">tempest-TestEncryptedCinderVolumes-987785960</nova:project>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <nova:port uuid="8bdda305-0b99-405f-a9b5-e64e9600d192">
Feb 02 15:51:20 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <system>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="serial">bb8fe37f-cd7c-43d8-9900-7d5ff683444d</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="uuid">bb8fe37f-cd7c-43d8-9900-7d5ff683444d</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </system>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <os>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </os>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <features>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </features>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </source>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </source>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <serial>7c7bd6ba-a234-4bcb-9249-65ba40a0a8d6</serial>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:51:20 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="7545544b-20a9-47bc-9408-1a949a3cea45"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:9e:eb:72"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <target dev="tap8bdda305-0b"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/console.log" append="off"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <video>
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </video>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:51:20 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:51:20 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:51:20 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:51:20 compute-0 nova_compute[239545]: </domain>
Feb 02 15:51:20 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.839 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Preparing to wait for external event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.839 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.839 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.840 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.840 239549 DEBUG nova.virt.libvirt.vif [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1660316277',display_name='tempest-TestEncryptedCinderVolumes-server-1660316277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1660316277',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-usxwqhrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:51:15Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=bb8fe37f-cd7c-43d8-9900-7d5ff683444d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.841 239549 DEBUG nova.network.os_vif_util [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.841 239549 DEBUG nova.network.os_vif_util [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.842 239549 DEBUG os_vif [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.842 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.843 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.843 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.846 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.846 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bdda305-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.847 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bdda305-0b, col_values=(('external_ids', {'iface-id': '8bdda305-0b99-405f-a9b5-e64e9600d192', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:eb:72', 'vm-uuid': 'bb8fe37f-cd7c-43d8-9900-7d5ff683444d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.848 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:20 compute-0 NetworkManager[49171]: <info>  [1770047480.8494] manager: (tap8bdda305-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.851 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.854 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.855 239549 INFO os_vif [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b')
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.900 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.901 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.901 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No VIF found with MAC fa:16:3e:9e:eb:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.901 239549 INFO nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Using config drive
Feb 02 15:51:20 compute-0 nova_compute[239545]: 2026-02-02 15:51:20.920 239549 DEBUG nova.storage.rbd_utils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:51:21 compute-0 ceph-mon[75334]: pgmap v1818: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.858 239549 DEBUG nova.network.neutron [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updated VIF entry in instance network info cache for port 8bdda305-0b99-405f-a9b5-e64e9600d192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.859 239549 DEBUG nova.network.neutron [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updating instance_info_cache with network_info: [{"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.877 239549 DEBUG oslo_concurrency.lockutils [req-36ce646f-355d-43c0-bbf0-4d474c1e91e8 req-36bd6a33-38fa-4385-ab7d-dc12f4f39329 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.984 239549 INFO nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Creating config drive at /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config
Feb 02 15:51:21 compute-0 nova_compute[239545]: 2026-02-02 15:51:21.989 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpegmb1wu7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.110 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpegmb1wu7" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.135 239549 DEBUG nova.storage.rbd_utils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:51:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.141 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.276 239549 DEBUG oslo_concurrency.processutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config bb8fe37f-cd7c-43d8-9900-7d5ff683444d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.276 239549 INFO nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Deleting local config drive /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d/disk.config because it was imported into RBD.
Feb 02 15:51:22 compute-0 kernel: tap8bdda305-0b: entered promiscuous mode
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.3151] manager: (tap8bdda305-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Feb 02 15:51:22 compute-0 ovn_controller[144995]: 2026-02-02T15:51:22Z|00270|binding|INFO|Claiming lport 8bdda305-0b99-405f-a9b5-e64e9600d192 for this chassis.
Feb 02 15:51:22 compute-0 ovn_controller[144995]: 2026-02-02T15:51:22Z|00271|binding|INFO|8bdda305-0b99-405f-a9b5-e64e9600d192: Claiming fa:16:3e:9e:eb:72 10.100.0.13
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.318 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:22 compute-0 ovn_controller[144995]: 2026-02-02T15:51:22Z|00272|binding|INFO|Setting lport 8bdda305-0b99-405f-a9b5-e64e9600d192 ovn-installed in OVS
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.324 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:22 compute-0 systemd-machined[207609]: New machine qemu-28-instance-0000001c.
Feb 02 15:51:22 compute-0 ovn_controller[144995]: 2026-02-02T15:51:22Z|00273|binding|INFO|Setting lport 8bdda305-0b99-405f-a9b5-e64e9600d192 up in Southbound
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.343 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:eb:72 10.100.0.13'], port_security=['fa:16:3e:9e:eb:72 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8fe37f-cd7c-43d8-9900-7d5ff683444d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '189fd68e-8be4-418b-963a-7de1d59bfc2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=8bdda305-0b99-405f-a9b5-e64e9600d192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.345 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 8bdda305-0b99-405f-a9b5-e64e9600d192 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c bound to our chassis
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.346 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:51:22 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.354 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce72411-74ad-4d60-9fd3-c59e18db2b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.355 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap571a8d26-11 in ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.358 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap571a8d26-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.358 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[23fd3aaf-f255-46cd-b179-1905dcc79bf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.359 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5645f8a6-1619-4f12-9c5b-7694e8e05455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 systemd-udevd[274647]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.369 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[11599b50-b44f-4afe-9a1d-90a6b804f07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.3796] device (tap8bdda305-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.3804] device (tap8bdda305-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.382 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[61b2f738-5748-4968-9aec-269bb5f395a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.406 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f55709-c7d1-4cee-9dc9-8ae32c19eb10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 systemd-udevd[274654]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.411 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[708f9b9f-967d-4ace-b440-f20656a68299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.4122] manager: (tap571a8d26-10): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.429 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[90e957cc-6e74-4ca2-a654-03df73a713ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.432 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea15ca-249f-428e-94d1-3e034dd112c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.4460] device (tap571a8d26-10): carrier: link connected
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.449 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[9187078d-e6cb-4008-8ad6-c2cc5ad5315e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.460 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f28ff493-3ee3-441f-9321-6d7247efb7b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496836, 'reachable_time': 30303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274677, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.469 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[89250262-8518-4643-abd6-2d273a19a350]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:4fa3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496836, 'tstamp': 496836}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274678, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.482 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[d85e3c61-c792-4244-8378-c6b18be1fa54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496836, 'reachable_time': 30303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274679, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.503 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[890b3bd9-04a8-4d3d-ad3c-c7a4db2312e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.544 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd4cc0e-b5ad-47c4-ac4b-3df603733378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.545 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.545 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.545 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap571a8d26-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.547 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:22 compute-0 kernel: tap571a8d26-10: entered promiscuous mode
Feb 02 15:51:22 compute-0 NetworkManager[49171]: <info>  [1770047482.5481] manager: (tap571a8d26-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.550 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap571a8d26-10, col_values=(('external_ids', {'iface-id': '394690c2-9066-491c-bd5b-f924947b57f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:22 compute-0 ovn_controller[144995]: 2026-02-02T15:51:22Z|00274|binding|INFO|Releasing lport 394690c2-9066-491c-bd5b-f924947b57f3 from this chassis (sb_readonly=0)
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.552 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.553 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe0af01-523a-48c6-9937-11acb5721162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.553 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:51:22 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:22.554 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'env', 'PROCESS_TAG=haproxy-571a8d26-1b08-4233-a158-71a28cbbf88c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/571a8d26-1b08-4233-a158-71a28cbbf88c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.557 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.758 239549 DEBUG nova.compute.manager [req-a4004b7b-8b16-4b8f-8728-165c30c7eb03 req-f1b89c80-56f1-4f84-b258-75751b129d3d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.758 239549 DEBUG oslo_concurrency.lockutils [req-a4004b7b-8b16-4b8f-8728-165c30c7eb03 req-f1b89c80-56f1-4f84-b258-75751b129d3d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.759 239549 DEBUG oslo_concurrency.lockutils [req-a4004b7b-8b16-4b8f-8728-165c30c7eb03 req-f1b89c80-56f1-4f84-b258-75751b129d3d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.759 239549 DEBUG oslo_concurrency.lockutils [req-a4004b7b-8b16-4b8f-8728-165c30c7eb03 req-f1b89c80-56f1-4f84-b258-75751b129d3d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:22 compute-0 nova_compute[239545]: 2026-02-02 15:51:22.759 239549 DEBUG nova.compute.manager [req-a4004b7b-8b16-4b8f-8728-165c30c7eb03 req-f1b89c80-56f1-4f84-b258-75751b129d3d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Processing event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:51:22 compute-0 podman[274748]: 2026-02-02 15:51:22.871773606 +0000 UTC m=+0.041732490 container create 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 02 15:51:22 compute-0 systemd[1]: Started libpod-conmon-87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4.scope.
Feb 02 15:51:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:51:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4904e508888daf4d3208638e613210c599b49b4097e59839cc916e988d6b432b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:51:22 compute-0 podman[274748]: 2026-02-02 15:51:22.848334698 +0000 UTC m=+0.018293602 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:51:22 compute-0 podman[274748]: 2026-02-02 15:51:22.947516655 +0000 UTC m=+0.117475559 container init 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:51:22 compute-0 podman[274748]: 2026-02-02 15:51:22.954285641 +0000 UTC m=+0.124244535 container start 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:51:22 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [NOTICE]   (274768) : New worker (274770) forked
Feb 02 15:51:22 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [NOTICE]   (274768) : Loading success.
Feb 02 15:51:23 compute-0 ceph-mon[75334]: pgmap v1819: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Feb 02 15:51:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 7.2 MiB/s wr, 26 op/s
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.559 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.875 239549 DEBUG nova.compute.manager [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.876 239549 DEBUG oslo_concurrency.lockutils [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.876 239549 DEBUG oslo_concurrency.lockutils [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.877 239549 DEBUG oslo_concurrency.lockutils [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.877 239549 DEBUG nova.compute.manager [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] No waiting events found dispatching network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.877 239549 WARNING nova.compute.manager [req-7f82a459-e934-4034-88a8-c614b56248fe req-8579a07e-8447-4f6d-9721-7a2a2111cabb d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received unexpected event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 for instance with vm_state building and task_state spawning.
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.878 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047484.8775988, bb8fe37f-cd7c-43d8-9900-7d5ff683444d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.878 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] VM Started (Lifecycle Event)
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.880 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.883 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.886 239549 INFO nova.virt.libvirt.driver [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Instance spawned successfully.
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.886 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.913 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.917 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.917 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.917 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.918 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.918 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.918 239549 DEBUG nova.virt.libvirt.driver [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:51:24 compute-0 nova_compute[239545]: 2026-02-02 15:51:24.922 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.047 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.048 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047484.8784935, bb8fe37f-cd7c-43d8-9900-7d5ff683444d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.048 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] VM Paused (Lifecycle Event)
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.105 239549 INFO nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Took 7.95 seconds to spawn the instance on the hypervisor.
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.106 239549 DEBUG nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.123 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.127 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047484.8828797, bb8fe37f-cd7c-43d8-9900-7d5ff683444d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.128 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] VM Resumed (Lifecycle Event)
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.165 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.170 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.212 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.218 239549 INFO nova.compute.manager [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Took 10.23 seconds to build instance.
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.245 239549 DEBUG oslo_concurrency.lockutils [None req-7676cc74-ecba-4675-a688-f4f2dcb1288c 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:25 compute-0 ceph-mon[75334]: pgmap v1820: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 7.2 MiB/s wr, 26 op/s
Feb 02 15:51:25 compute-0 nova_compute[239545]: 2026-02-02 15:51:25.850 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.5 MiB/s wr, 37 op/s
Feb 02 15:51:26 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:26.298 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:26 compute-0 podman[274786]: 2026-02-02 15:51:26.335960956 +0000 UTC m=+0.062206626 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 02 15:51:26 compute-0 podman[274785]: 2026-02-02 15:51:26.358500742 +0000 UTC m=+0.085119251 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:51:27 compute-0 ceph-mon[75334]: pgmap v1821: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.5 MiB/s wr, 37 op/s
Feb 02 15:51:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:51:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688120025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:51:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:51:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1688120025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:51:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 12 KiB/s wr, 11 op/s
Feb 02 15:51:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1688120025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:51:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1688120025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:51:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:29 compute-0 ceph-mon[75334]: pgmap v1822: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 12 KiB/s wr, 11 op/s
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.561 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.752 239549 DEBUG nova.compute.manager [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-changed-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.753 239549 DEBUG nova.compute.manager [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Refreshing instance network info cache due to event network-changed-8bdda305-0b99-405f-a9b5-e64e9600d192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.753 239549 DEBUG oslo_concurrency.lockutils [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.753 239549 DEBUG oslo_concurrency.lockutils [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:51:29 compute-0 nova_compute[239545]: 2026-02-02 15:51:29.753 239549 DEBUG nova.network.neutron [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Refreshing network info cache for port 8bdda305-0b99-405f-a9b5-e64e9600d192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:51:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 12 KiB/s wr, 34 op/s
Feb 02 15:51:30 compute-0 nova_compute[239545]: 2026-02-02 15:51:30.852 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:31 compute-0 nova_compute[239545]: 2026-02-02 15:51:31.150 239549 DEBUG nova.network.neutron [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updated VIF entry in instance network info cache for port 8bdda305-0b99-405f-a9b5-e64e9600d192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:51:31 compute-0 nova_compute[239545]: 2026-02-02 15:51:31.150 239549 DEBUG nova.network.neutron [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updating instance_info_cache with network_info: [{"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:51:31 compute-0 nova_compute[239545]: 2026-02-02 15:51:31.177 239549 DEBUG oslo_concurrency.lockutils [req-80a85274-f003-4acf-bc42-77ea9647a7ad req-c27aa48e-6865-470e-8e74-617d06e02dda d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-bb8fe37f-cd7c-43d8-9900-7d5ff683444d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:51:31 compute-0 ceph-mon[75334]: pgmap v1823: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 12 KiB/s wr, 34 op/s
Feb 02 15:51:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 15:51:33 compute-0 ceph-mon[75334]: pgmap v1824: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 15:51:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 15:51:34 compute-0 nova_compute[239545]: 2026-02-02 15:51:34.562 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:35 compute-0 ceph-mon[75334]: pgmap v1825: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 02 15:51:35 compute-0 nova_compute[239545]: 2026-02-02 15:51:35.855 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 256 KiB/s wr, 82 op/s
Feb 02 15:51:36 compute-0 ovn_controller[144995]: 2026-02-02T15:51:36Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:eb:72 10.100.0.13
Feb 02 15:51:36 compute-0 ovn_controller[144995]: 2026-02-02T15:51:36Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:eb:72 10.100.0.13
Feb 02 15:51:36 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: pgmap v1826: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 256 KiB/s wr, 82 op/s
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.403311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497403343, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 797, "num_deletes": 252, "total_data_size": 995060, "memory_usage": 1010072, "flush_reason": "Manual Compaction"}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497408652, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 973707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36063, "largest_seqno": 36859, "table_properties": {"data_size": 969647, "index_size": 1776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9342, "raw_average_key_size": 19, "raw_value_size": 961370, "raw_average_value_size": 2036, "num_data_blocks": 79, "num_entries": 472, "num_filter_entries": 472, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047439, "oldest_key_time": 1770047439, "file_creation_time": 1770047497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 5407 microseconds, and 2637 cpu microseconds.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.408697) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 973707 bytes OK
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.408738) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.410475) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.410492) EVENT_LOG_v1 {"time_micros": 1770047497410486, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.410518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 991041, prev total WAL file size 991041, number of live WAL files 2.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.411129) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(950KB)], [74(10MB)]
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497411197, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12165910, "oldest_snapshot_seqno": -1}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6730 keys, 10421710 bytes, temperature: kUnknown
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497459188, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10421710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10371680, "index_size": 32126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 170580, "raw_average_key_size": 25, "raw_value_size": 10245767, "raw_average_value_size": 1522, "num_data_blocks": 1274, "num_entries": 6730, "num_filter_entries": 6730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.459662) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10421710 bytes
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.461257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.1 rd, 216.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 7251, records dropped: 521 output_compression: NoCompression
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.461278) EVENT_LOG_v1 {"time_micros": 1770047497461267, "job": 42, "event": "compaction_finished", "compaction_time_micros": 48251, "compaction_time_cpu_micros": 20598, "output_level": 6, "num_output_files": 1, "total_output_size": 10421710, "num_input_records": 7251, "num_output_records": 6730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497461770, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047497463153, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.410967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.463288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.463293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.463298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.463300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:37 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:51:37.463301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:51:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 244 KiB/s wr, 71 op/s
Feb 02 15:51:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:39 compute-0 ceph-mon[75334]: pgmap v1827: 305 pgs: 305 active+clean; 284 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 244 KiB/s wr, 71 op/s
Feb 02 15:51:39 compute-0 nova_compute[239545]: 2026-02-02 15:51:39.566 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 300 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 77 op/s
Feb 02 15:51:40 compute-0 nova_compute[239545]: 2026-02-02 15:51:40.859 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:41 compute-0 ceph-mon[75334]: pgmap v1828: 305 pgs: 305 active+clean; 300 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 77 op/s
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.8 MiB/s wr, 115 op/s
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:51:42
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta']
Feb 02 15:51:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:51:43 compute-0 ceph-mon[75334]: pgmap v1829: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.8 MiB/s wr, 115 op/s
Feb 02 15:51:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Feb 02 15:51:44 compute-0 nova_compute[239545]: 2026-02-02 15:51:44.566 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:44 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:44 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:51:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:51:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.385 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.386 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.386 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.386 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.386 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.387 239549 INFO nova.compute.manager [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Terminating instance
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.388 239549 DEBUG nova.compute.manager [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:51:45 compute-0 kernel: tap8bdda305-0b (unregistering): left promiscuous mode
Feb 02 15:51:45 compute-0 NetworkManager[49171]: <info>  [1770047505.4292] device (tap8bdda305-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:51:45 compute-0 ovn_controller[144995]: 2026-02-02T15:51:45Z|00275|binding|INFO|Releasing lport 8bdda305-0b99-405f-a9b5-e64e9600d192 from this chassis (sb_readonly=0)
Feb 02 15:51:45 compute-0 ovn_controller[144995]: 2026-02-02T15:51:45Z|00276|binding|INFO|Setting lport 8bdda305-0b99-405f-a9b5-e64e9600d192 down in Southbound
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.440 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 ovn_controller[144995]: 2026-02-02T15:51:45Z|00277|binding|INFO|Removing iface tap8bdda305-0b ovn-installed in OVS
Feb 02 15:51:45 compute-0 ceph-mon[75334]: pgmap v1830: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.443 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.449 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.453 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:eb:72 10.100.0.13'], port_security=['fa:16:3e:9e:eb:72 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8fe37f-cd7c-43d8-9900-7d5ff683444d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '189fd68e-8be4-418b-963a-7de1d59bfc2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=8bdda305-0b99-405f-a9b5-e64e9600d192) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.455 154982 INFO neutron.agent.ovn.metadata.agent [-] Port 8bdda305-0b99-405f-a9b5-e64e9600d192 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c unbound from our chassis
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.458 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 571a8d26-1b08-4233-a158-71a28cbbf88c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.459 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[117d7dab-a0e7-43d5-ac56-92e96aa55e3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.460 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace which is not needed anymore
Feb 02 15:51:45 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb 02 15:51:45 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 14.494s CPU time.
Feb 02 15:51:45 compute-0 systemd-machined[207609]: Machine qemu-28-instance-0000001c terminated.
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [NOTICE]   (274768) : haproxy version is 2.8.14-c23fe91
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [NOTICE]   (274768) : path to executable is /usr/sbin/haproxy
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [WARNING]  (274768) : Exiting Master process...
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [WARNING]  (274768) : Exiting Master process...
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [ALERT]    (274768) : Current worker (274770) exited with code 143 (Terminated)
Feb 02 15:51:45 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[274764]: [WARNING]  (274768) : All workers exited. Exiting... (0)
Feb 02 15:51:45 compute-0 systemd[1]: libpod-87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4.scope: Deactivated successfully.
Feb 02 15:51:45 compute-0 podman[274855]: 2026-02-02 15:51:45.575288572 +0000 UTC m=+0.039886045 container died 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4-userdata-shm.mount: Deactivated successfully.
Feb 02 15:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4904e508888daf4d3208638e613210c599b49b4097e59839cc916e988d6b432b-merged.mount: Deactivated successfully.
Feb 02 15:51:45 compute-0 podman[274855]: 2026-02-02 15:51:45.615651367 +0000 UTC m=+0.080248830 container cleanup 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.615 239549 INFO nova.virt.libvirt.driver [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Instance destroyed successfully.
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.615 239549 DEBUG nova.objects.instance [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'resources' on Instance uuid bb8fe37f-cd7c-43d8-9900-7d5ff683444d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:51:45 compute-0 systemd[1]: libpod-conmon-87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4.scope: Deactivated successfully.
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.637 239549 DEBUG nova.virt.libvirt.vif [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:51:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1660316277',display_name='tempest-TestEncryptedCinderVolumes-server-1660316277',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1660316277',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-usxwqhrl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:51:25Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=bb8fe37f-cd7c-43d8-9900-7d5ff683444d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.637 239549 DEBUG nova.network.os_vif_util [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "8bdda305-0b99-405f-a9b5-e64e9600d192", "address": "fa:16:3e:9e:eb:72", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bdda305-0b", "ovs_interfaceid": "8bdda305-0b99-405f-a9b5-e64e9600d192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.638 239549 DEBUG nova.network.os_vif_util [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.638 239549 DEBUG os_vif [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.640 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.640 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bdda305-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.642 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.643 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.645 239549 INFO os_vif [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:eb:72,bridge_name='br-int',has_traffic_filtering=True,id=8bdda305-0b99-405f-a9b5-e64e9600d192,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bdda305-0b')
Feb 02 15:51:45 compute-0 podman[274892]: 2026-02-02 15:51:45.672837248 +0000 UTC m=+0.041898845 container remove 87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.676 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[447448e9-975f-4d05-afa8-6756d45e44cd]: (4, ('Mon Feb  2 03:51:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4)\n87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4\nMon Feb  2 03:51:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4)\n87dacf3902dbc5b75050410dd847e93520ed2ecd470eb5decd22f6aca34fddc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.677 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[f22bbea1-7fcc-4a25-a2bb-ec6c4a7e22dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.678 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.680 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 kernel: tap571a8d26-10: left promiscuous mode
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.682 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.684 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[af20d632-2d2b-47ee-94b0-8d190687b2ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.687 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.706 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1041e7-b779-42a9-abc8-95ed6e460f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.707 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3627f4a-edd5-4299-b12a-1de3cd6a2b5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.718 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[daba11d1-fabd-4c68-97e5-b253ccbd4f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496831, 'reachable_time': 18838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274925, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.720 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:51:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d571a8d26\x2d1b08\x2d4233\x2da158\x2d71a28cbbf88c.mount: Deactivated successfully.
Feb 02 15:51:45 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:45.720 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[564036c8-f8b8-4327-bf34-0082291a7cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.782 239549 INFO nova.virt.libvirt.driver [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Deleting instance files /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d_del
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.783 239549 INFO nova.virt.libvirt.driver [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Deletion of /var/lib/nova/instances/bb8fe37f-cd7c-43d8-9900-7d5ff683444d_del complete
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.817 239549 DEBUG nova.compute.manager [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-unplugged-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.818 239549 DEBUG oslo_concurrency.lockutils [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.819 239549 DEBUG oslo_concurrency.lockutils [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.819 239549 DEBUG oslo_concurrency.lockutils [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.820 239549 DEBUG nova.compute.manager [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] No waiting events found dispatching network-vif-unplugged-8bdda305-0b99-405f-a9b5-e64e9600d192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.820 239549 DEBUG nova.compute.manager [req-4cfae7db-158c-4cfa-8004-1e2a90aaf528 req-d27082b5-5c9e-4156-8875-f928109ea965 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-unplugged-8bdda305-0b99-405f-a9b5-e64e9600d192 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.839 239549 INFO nova.compute.manager [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Took 0.45 seconds to destroy the instance on the hypervisor.
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.840 239549 DEBUG oslo.service.loopingcall [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.840 239549 DEBUG nova.compute.manager [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:51:45 compute-0 nova_compute[239545]: 2026-02-02 15:51:45.841 239549 DEBUG nova.network.neutron [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:51:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.630 239549 DEBUG nova.network.neutron [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.658 239549 INFO nova.compute.manager [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Took 0.82 seconds to deallocate network for instance.
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.720 239549 DEBUG nova.compute.manager [req-6b0c7f74-4e9d-4a71-919c-e1478be9cb49 req-1a62d0a5-6225-4307-9571-210e040ac07d d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-deleted-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.871 239549 INFO nova.compute.manager [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Took 0.21 seconds to detach 1 volumes for instance.
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.919 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.920 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:46 compute-0 nova_compute[239545]: 2026-02-02 15:51:46.980 239549 DEBUG oslo_concurrency.processutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:47 compute-0 ceph-mon[75334]: pgmap v1831: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Feb 02 15:51:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:51:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572623376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.623 239549 DEBUG oslo_concurrency.processutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.628 239549 DEBUG nova.compute.provider_tree [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.658 239549 DEBUG nova.scheduler.client.report [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.679 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.699 239549 INFO nova.scheduler.client.report [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Deleted allocations for instance bb8fe37f-cd7c-43d8-9900-7d5ff683444d
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.756 239549 DEBUG oslo_concurrency.lockutils [None req-ea23dce2-0871-4f14-ba54-372ccb99796f 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.911 239549 DEBUG nova.compute.manager [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.912 239549 DEBUG oslo_concurrency.lockutils [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.913 239549 DEBUG oslo_concurrency.lockutils [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.913 239549 DEBUG oslo_concurrency.lockutils [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "bb8fe37f-cd7c-43d8-9900-7d5ff683444d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.913 239549 DEBUG nova.compute.manager [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] No waiting events found dispatching network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:51:47 compute-0 nova_compute[239545]: 2026-02-02 15:51:47.914 239549 WARNING nova.compute.manager [req-37e37df8-13f2-43d0-9ba7-335bf5d3b9ee req-61e9cbce-384c-4ee0-bfc2-fcb6f87e6d38 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Received unexpected event network-vif-plugged-8bdda305-0b99-405f-a9b5-e64e9600d192 for instance with vm_state deleted and task_state None.
Feb 02 15:51:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 5.6 MiB/s wr, 69 op/s
Feb 02 15:51:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3572623376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:49 compute-0 nova_compute[239545]: 2026-02-02 15:51:49.569 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Feb 02 15:51:49 compute-0 ceph-mon[75334]: pgmap v1832: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 5.6 MiB/s wr, 69 op/s
Feb 02 15:51:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Feb 02 15:51:49 compute-0 ceph-mon[75334]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Feb 02 15:51:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 5.1 MiB/s wr, 77 op/s
Feb 02 15:51:50 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:51:50 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1814791784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:50 compute-0 nova_compute[239545]: 2026-02-02 15:51:50.643 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:50 compute-0 ceph-mon[75334]: osdmap e490: 3 total, 3 up, 3 in
Feb 02 15:51:50 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1814791784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:51 compute-0 ceph-mon[75334]: pgmap v1834: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 5.1 MiB/s wr, 77 op/s
Feb 02 15:51:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 39 op/s
Feb 02 15:51:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:53 compute-0 ceph-mon[75334]: pgmap v1835: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 39 op/s
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 39 op/s
Feb 02 15:51:54 compute-0 nova_compute[239545]: 2026-02-02 15:51:54.571 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632082077009476 of space, bias 1.0, pg target 0.22896246231028428 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002929221095692299 of space, bias 1.0, pg target 0.8787663287076897 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.867358320538883e-06 of space, bias 1.0, pg target 0.001160207496161665 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677628867582875 of space, bias 1.0, pg target 0.20032886602748626 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.424844822179078e-06 of space, bias 4.0, pg target 0.0017098137866148938 quantized to 16 (current 16)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:51:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.646 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.881 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.881 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.896 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 02 15:51:55 compute-0 ceph-mon[75334]: pgmap v1836: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 19 KiB/s wr, 39 op/s
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.956 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.957 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.962 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 02 15:51:55 compute-0 nova_compute[239545]: 2026-02-02 15:51:55.963 239549 INFO nova.compute.claims [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Claim successful on node compute-0.ctlplane.example.com
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.111 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 02 15:51:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:51:56 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271685513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.620 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.625 239549 DEBUG nova.compute.provider_tree [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.646 239549 DEBUG nova.scheduler.client.report [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.666 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.666 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.717 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.718 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.736 239549 INFO nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.753 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.813 239549 INFO nova.virt.block_device [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Booting with volume d4cd7e71-df50-47bc-ab5a-0da62f8b37ec at /dev/vda
Feb 02 15:51:56 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/271685513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.930 239549 DEBUG os_brick.utils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.931 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.939 248437 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.940 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8272f2-b24d-4087-9cb0-61f9dd171a35]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.941 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.946 248437 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.946 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[479c4a84-2ffa-49d2-89ad-5c292f496843]: (4, ('InitiatorName=iqn.1994-05.com.redhat:86745e18af85', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.948 248437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.956 248437 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.957 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[515e1360-263d-4201-b997-87ddeaf14805]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.958 248437 DEBUG oslo.privsep.daemon [-] privsep: reply[09d2070d-894b-4b9e-a0e9-5304def5197f]: (4, '91f81291-8830-4d3a-ad9a-f49b9247697f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.959 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.980 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.982 239549 DEBUG os_brick.initiator.connectors.lightos [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.982 239549 DEBUG os_brick.initiator.connectors.lightos [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.982 239549 DEBUG os_brick.initiator.connectors.lightos [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.982 239549 DEBUG os_brick.utils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:86745e18af85', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '91f81291-8830-4d3a-ad9a-f49b9247697f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb 02 15:51:56 compute-0 nova_compute[239545]: 2026-02-02 15:51:56.983 239549 DEBUG nova.virt.block_device [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updating existing volume attachment record: 7118a500-5e96-4247-8eba-ed3eef5832cb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb 02 15:51:57 compute-0 nova_compute[239545]: 2026-02-02 15:51:57.123 239549 DEBUG nova.policy [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16b55bfc98574e0096db4f19bcdcbb2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 02 15:51:57 compute-0 podman[274980]: 2026-02-02 15:51:57.321298905 +0000 UTC m=+0.054576168 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 02 15:51:57 compute-0 podman[274979]: 2026-02-02 15:51:57.335576757 +0000 UTC m=+0.073117255 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:51:57 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:51:57 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/140415796' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:57 compute-0 ceph-mon[75334]: pgmap v1837: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 02 15:51:57 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/140415796' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:51:57 compute-0 nova_compute[239545]: 2026-02-02 15:51:57.935 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Successfully created port: d01f5485-2544-4646-8ca6-308513fda325 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.036 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.037 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.038 239549 INFO nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Creating image(s)
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.038 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.039 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Ensure instance console log exists: /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.039 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.039 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:58 compute-0 nova_compute[239545]: 2026-02-02 15:51:58.040 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 02 15:51:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.028 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Successfully updated port: d01f5485-2544-4646-8ca6-308513fda325 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.052 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.052 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquired lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.052 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.161 239549 DEBUG nova.compute.manager [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-changed-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.162 239549 DEBUG nova.compute.manager [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Refreshing instance network info cache due to event network-changed-d01f5485-2544-4646-8ca6-308513fda325. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.162 239549 DEBUG oslo_concurrency.lockutils [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:51:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:59.261 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:51:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:59.262 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:51:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:51:59.262 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.573 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:51:59 compute-0 nova_compute[239545]: 2026-02-02 15:51:59.611 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 02 15:51:59 compute-0 ceph-mon[75334]: pgmap v1838: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 02 15:52:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.615 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047505.613483, bb8fe37f-cd7c-43d8-9900-7d5ff683444d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.615 239549 INFO nova.compute.manager [-] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] VM Stopped (Lifecycle Event)
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.629 239549 DEBUG nova.network.neutron [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updating instance_info_cache with network_info: [{"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.648 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.652 239549 DEBUG nova.compute.manager [None req-38256de6-5c53-4149-bdfe-af7ea6df8096 - - - - - -] [instance: bb8fe37f-cd7c-43d8-9900-7d5ff683444d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.654 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Releasing lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.654 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Instance network_info: |[{"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.655 239549 DEBUG oslo_concurrency.lockutils [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.655 239549 DEBUG nova.network.neutron [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Refreshing network info cache for port d01f5485-2544-4646-8ca6-308513fda325 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.659 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Start _get_guest_xml network_info=[{"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'attachment_id': '7118a500-5e96-4247-8eba-ed3eef5832cb', 'mount_device': '/dev/vda', 'boot_index': 0, 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c79d4e81-b8f8-4ca4-8355-90da048bd198', 'attached_at': '', 'detached_at': '', 'volume_id': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'serial': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.666 239549 WARNING nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.672 239549 DEBUG nova.virt.libvirt.host [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.673 239549 DEBUG nova.virt.libvirt.host [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.684 239549 DEBUG nova.virt.libvirt.host [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.684 239549 DEBUG nova.virt.libvirt.host [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.685 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.685 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T15:29:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7b3bc58e-2e4f-458d-8419-20d6ee2a81c6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.686 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.686 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.686 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.686 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.686 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.687 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.687 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.687 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.687 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.687 239549 DEBUG nova.virt.hardware [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.706 239549 DEBUG nova.storage.rbd_utils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:52:00 compute-0 nova_compute[239545]: 2026-02-02 15:52:00.709 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:01 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 02 15:52:01 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838254230' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.256 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.384 239549 DEBUG os_brick.encryptors [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Using volume encryption metadata '{'encryption_key_id': '04371444-3ca4-4b9f-91af-78ea834184f1', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c79d4e81-b8f8-4ca4-8355-90da048bd198', 'attached_at': '', 'detached_at': '', 'volume_id': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec', 'serial': 'd4cd7e71-df50-47bc-ab5a-0da62f8b37ec'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.388 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.404 239549 DEBUG barbicanclient.v1.secrets [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/04371444-3ca4-4b9f-91af-78ea834184f1 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.404 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.428 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.430 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.452 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.453 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.481 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.482 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.506 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.506 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.531 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.532 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.557 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.557 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.659 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.660 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.686 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.687 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.709 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.710 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.737 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.738 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.764 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.764 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.788 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.788 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.812 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.813 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.838 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.838 239549 INFO barbicanclient.base [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Calculated Secrets uuid ref: secrets/04371444-3ca4-4b9f-91af-78ea834184f1
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.857 239549 DEBUG barbicanclient.client [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.858 239549 DEBUG nova.virt.libvirt.host [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <usage type="volume">
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <volume>d4cd7e71-df50-47bc-ab5a-0da62f8b37ec</volume>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </usage>
Feb 02 15:52:01 compute-0 nova_compute[239545]: </secret>
Feb 02 15:52:01 compute-0 nova_compute[239545]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.888 239549 DEBUG nova.network.neutron [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updated VIF entry in instance network info cache for port d01f5485-2544-4646-8ca6-308513fda325. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.888 239549 DEBUG nova.network.neutron [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updating instance_info_cache with network_info: [{"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.892 239549 DEBUG nova.virt.libvirt.vif [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-95220137',display_name='tempest-TestEncryptedCinderVolumes-server-95220137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-95220137',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-8455trtl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:51:56Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=c79d4e81-b8f8-4ca4-8355-90da048bd198,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.892 239549 DEBUG nova.network.os_vif_util [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.893 239549 DEBUG nova.network.os_vif_util [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.894 239549 DEBUG nova.objects.instance [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'pci_devices' on Instance uuid c79d4e81-b8f8-4ca4-8355-90da048bd198 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.906 239549 DEBUG oslo_concurrency.lockutils [req-8b4a6522-90ed-4d07-a56d-b1f19c5ce924 req-072f05b8-510f-4e70-822c-2c9ae7fc6a2b d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.909 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] End _get_guest_xml xml=<domain type="kvm">
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <uuid>c79d4e81-b8f8-4ca4-8355-90da048bd198</uuid>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <name>instance-0000001d</name>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <memory>131072</memory>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <vcpu>1</vcpu>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <metadata>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-95220137</nova:name>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:creationTime>2026-02-02 15:52:00</nova:creationTime>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:flavor name="m1.nano">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:memory>128</nova:memory>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:disk>1</nova:disk>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:swap>0</nova:swap>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:ephemeral>0</nova:ephemeral>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:vcpus>1</nova:vcpus>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </nova:flavor>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:owner>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:user uuid="16b55bfc98574e0096db4f19bcdcbb2e">tempest-TestEncryptedCinderVolumes-987785960-project-member</nova:user>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:project uuid="6e1abae6c1404ce2b24265e7136ffe6a">tempest-TestEncryptedCinderVolumes-987785960</nova:project>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </nova:owner>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <nova:ports>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <nova:port uuid="d01f5485-2544-4646-8ca6-308513fda325">
Feb 02 15:52:01 compute-0 nova_compute[239545]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:         </nova:port>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </nova:ports>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </nova:instance>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </metadata>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <sysinfo type="smbios">
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <system>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="manufacturer">RDO</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="product">OpenStack Compute</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="serial">c79d4e81-b8f8-4ca4-8355-90da048bd198</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="uuid">c79d4e81-b8f8-4ca4-8355-90da048bd198</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <entry name="family">Virtual Machine</entry>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </system>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </sysinfo>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <os>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <boot dev="hd"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <smbios mode="sysinfo"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </os>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <features>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <acpi/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <apic/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <vmcoreinfo/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </features>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <clock offset="utc">
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <timer name="pit" tickpolicy="delay"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <timer name="hpet" present="no"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </clock>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <cpu mode="host-model" match="exact">
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <topology sockets="1" cores="1" threads="1"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </cpu>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   <devices>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <disk type="network" device="cdrom">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <driver type="raw" cache="none"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <source protocol="rbd" name="vms/c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </source>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <target dev="sda" bus="sata"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <disk type="network" device="disk">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <source protocol="rbd" name="volumes/volume-d4cd7e71-df50-47bc-ab5a-0da62f8b37ec">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <host name="192.168.122.100" port="6789"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </source>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <auth username="openstack">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <secret type="ceph" uuid="e43470b2-6632-573a-87d3-0f5428ec59e9"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </auth>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <target dev="vda" bus="virtio"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <serial>d4cd7e71-df50-47bc-ab5a-0da62f8b37ec</serial>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <encryption format="luks">
Feb 02 15:52:01 compute-0 nova_compute[239545]:         <secret type="passphrase" uuid="93b5659c-8550-4f29-a9da-bc8ee12cef83"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       </encryption>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </disk>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <interface type="ethernet">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <mac address="fa:16:3e:82:bb:68"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <driver name="vhost" rx_queue_size="512"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <mtu size="1442"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <target dev="tapd01f5485-25"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </interface>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <serial type="pty">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <log file="/var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/console.log" append="off"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </serial>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <video>
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <model type="virtio"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </video>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <input type="tablet" bus="usb"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <rng model="virtio">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <backend model="random">/dev/urandom</backend>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </rng>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="pci" model="pcie-root-port"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <controller type="usb" index="0"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     <memballoon model="virtio">
Feb 02 15:52:01 compute-0 nova_compute[239545]:       <stats period="10"/>
Feb 02 15:52:01 compute-0 nova_compute[239545]:     </memballoon>
Feb 02 15:52:01 compute-0 nova_compute[239545]:   </devices>
Feb 02 15:52:01 compute-0 nova_compute[239545]: </domain>
Feb 02 15:52:01 compute-0 nova_compute[239545]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.910 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Preparing to wait for external event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.910 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.910 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.911 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.911 239549 DEBUG nova.virt.libvirt.vif [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T15:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-95220137',display_name='tempest-TestEncryptedCinderVolumes-server-95220137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-95220137',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-8455trtl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T15:51:56Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=c79d4e81-b8f8-4ca4-8355-90da048bd198,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.912 239549 DEBUG nova.network.os_vif_util [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.912 239549 DEBUG nova.network.os_vif_util [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.913 239549 DEBUG os_vif [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.914 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.915 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.915 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.918 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.918 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd01f5485-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.918 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd01f5485-25, col_values=(('external_ids', {'iface-id': 'd01f5485-2544-4646-8ca6-308513fda325', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:bb:68', 'vm-uuid': 'c79d4e81-b8f8-4ca4-8355-90da048bd198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.920 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:01 compute-0 NetworkManager[49171]: <info>  [1770047521.9217] manager: (tapd01f5485-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.924 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.925 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.926 239549 INFO os_vif [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25')
Feb 02 15:52:01 compute-0 ceph-mon[75334]: pgmap v1839: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Feb 02 15:52:01 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1838254230' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.969 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.969 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.969 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] No VIF found with MAC fa:16:3e:82:bb:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.970 239549 INFO nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Using config drive
Feb 02 15:52:01 compute-0 nova_compute[239545]: 2026-02-02 15:52:01.989 239549 DEBUG nova.storage.rbd_utils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:52:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.302 239549 INFO nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Creating config drive at /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.306 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvzi4gvfu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.429 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvzi4gvfu" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.466 239549 DEBUG nova.storage.rbd_utils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] rbd image c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.471 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.601 239549 DEBUG oslo_concurrency.processutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config c79d4e81-b8f8-4ca4-8355-90da048bd198_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.602 239549 INFO nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Deleting local config drive /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198/disk.config because it was imported into RBD.
Feb 02 15:52:02 compute-0 kernel: tapd01f5485-25: entered promiscuous mode
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.6469] manager: (tapd01f5485-25): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Feb 02 15:52:02 compute-0 ovn_controller[144995]: 2026-02-02T15:52:02Z|00278|binding|INFO|Claiming lport d01f5485-2544-4646-8ca6-308513fda325 for this chassis.
Feb 02 15:52:02 compute-0 ovn_controller[144995]: 2026-02-02T15:52:02Z|00279|binding|INFO|d01f5485-2544-4646-8ca6-308513fda325: Claiming fa:16:3e:82:bb:68 10.100.0.4
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.648 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 ovn_controller[144995]: 2026-02-02T15:52:02Z|00280|binding|INFO|Setting lport d01f5485-2544-4646-8ca6-308513fda325 ovn-installed in OVS
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.655 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 ovn_controller[144995]: 2026-02-02T15:52:02Z|00281|binding|INFO|Setting lport d01f5485-2544-4646-8ca6-308513fda325 up in Southbound
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.658 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.658 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:bb:68 10.100.0.4'], port_security=['fa:16:3e:82:bb:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c79d4e81-b8f8-4ca4-8355-90da048bd198', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '189fd68e-8be4-418b-963a-7de1d59bfc2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=d01f5485-2544-4646-8ca6-308513fda325) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.660 154982 INFO neutron.agent.ovn.metadata.agent [-] Port d01f5485-2544-4646-8ca6-308513fda325 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c bound to our chassis
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.664 154982 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:52:02 compute-0 systemd-udevd[275138]: Network interface NamePolicy= disabled on kernel command line.
Feb 02 15:52:02 compute-0 systemd-machined[207609]: New machine qemu-29-instance-0000001d.
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.673 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[acd35a31-5bf4-4d5c-b158-935e0e1cc955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.675 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap571a8d26-11 in ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.676 245965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap571a8d26-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.676 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a316bd92-5cf1-4b42-a8b1-e02cd12730ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.677 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[166bd975-938d-457d-a18e-bc2021d017d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.685 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cc804c-24b2-45a2-b40d-3fba8b3e4606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.6886] device (tapd01f5485-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.6892] device (tapd01f5485-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.693 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a04d553d-0ace-4617-a31a-d1aaec1f8e7a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.711 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[e74e07a1-3d43-4f23-843b-d97aadaece1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.7181] manager: (tap571a8d26-10): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.718 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba830b2-702c-4f50-9607-f4f3e3cfd0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.746 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[7f47fd4a-f7b1-4aca-aba4-e33fb41f2c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.749 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[c16b679e-949f-4e02-8e0d-13e8ae6b5cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.7638] device (tap571a8d26-10): carrier: link connected
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.768 245979 DEBUG oslo.privsep.daemon [-] privsep: reply[a16eedf9-8a48-4e42-8f9e-3f308aaf9817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.780 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[ced936fa-ee30-43fa-86e0-9aa043154de0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500867, 'reachable_time': 38616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275170, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.792 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4a4db2-fafc-4c7d-979a-862d59d637cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:4fa3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500867, 'tstamp': 500867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275171, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.802 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[24f54ad2-d473-425a-a36c-552d23b7c51f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap571a8d26-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:4f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500867, 'reachable_time': 38616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275172, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.823 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eccd6ee5-1448-4f4c-8a4f-482f2a1460b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.855 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[50da4adf-1075-4833-849e-dd6fa869029f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.856 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.856 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.857 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap571a8d26-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.858 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 NetworkManager[49171]: <info>  [1770047522.8590] manager: (tap571a8d26-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Feb 02 15:52:02 compute-0 kernel: tap571a8d26-10: entered promiscuous mode
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.861 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap571a8d26-10, col_values=(('external_ids', {'iface-id': '394690c2-9066-491c-bd5b-f924947b57f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.860 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.862 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 ovn_controller[144995]: 2026-02-02T15:52:02Z|00282|binding|INFO|Releasing lport 394690c2-9066-491c-bd5b-f924947b57f3 from this chassis (sb_readonly=0)
Feb 02 15:52:02 compute-0 nova_compute[239545]: 2026-02-02 15:52:02.866 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.867 154982 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.868 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[2b13be81-0cba-4ddf-9281-3bcbfbf2bd0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.868 154982 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: global
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     log         /dev/log local0 debug
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     log-tag     haproxy-metadata-proxy-571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     user        root
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     group       root
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     maxconn     1024
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     pidfile     /var/lib/neutron/external/pids/571a8d26-1b08-4233-a158-71a28cbbf88c.pid.haproxy
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     daemon
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: defaults
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     log global
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     mode http
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     option httplog
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     option dontlognull
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     option http-server-close
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     option forwardfor
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     retries                 3
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     timeout http-request    30s
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     timeout connect         30s
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     timeout client          32s
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     timeout server          32s
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     timeout http-keep-alive 30s
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: listen listener
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     bind 169.254.169.254:80
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     server metadata /var/lib/neutron/metadata_proxy
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:     http-request add-header X-OVN-Network-ID 571a8d26-1b08-4233-a158-71a28cbbf88c
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 02 15:52:02 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:02.869 154982 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'env', 'PROCESS_TAG=haproxy-571a8d26-1b08-4233-a158-71a28cbbf88c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/571a8d26-1b08-4233-a158-71a28cbbf88c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 02 15:52:03 compute-0 podman[275211]: 2026-02-02 15:52:03.207454305 +0000 UTC m=+0.057761485 container create ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 02 15:52:03 compute-0 systemd[1]: Started libpod-conmon-ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c.scope.
Feb 02 15:52:03 compute-0 podman[275211]: 2026-02-02 15:52:03.172983025 +0000 UTC m=+0.023290235 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 02 15:52:03 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/090ec97dddd8d502989fe0bcb473d174866989c5e1353eea1ed87b2ac17a2eb8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:03 compute-0 podman[275211]: 2026-02-02 15:52:03.289446737 +0000 UTC m=+0.139753917 container init ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:52:03 compute-0 podman[275211]: 2026-02-02 15:52:03.293095248 +0000 UTC m=+0.143402428 container start ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 02 15:52:03 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [NOTICE]   (275259) : New worker (275261) forked
Feb 02 15:52:03 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [NOTICE]   (275259) : Loading success.
Feb 02 15:52:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:03 compute-0 ceph-mon[75334]: pgmap v1840: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Feb 02 15:52:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 12 op/s
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.574 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.685 239549 DEBUG nova.compute.manager [req-806bec21-066c-4a89-a716-b1edda615e6a req-02f6d77c-e9e4-4751-a764-3472d529b8e5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.685 239549 DEBUG oslo_concurrency.lockutils [req-806bec21-066c-4a89-a716-b1edda615e6a req-02f6d77c-e9e4-4751-a764-3472d529b8e5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.685 239549 DEBUG oslo_concurrency.lockutils [req-806bec21-066c-4a89-a716-b1edda615e6a req-02f6d77c-e9e4-4751-a764-3472d529b8e5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.685 239549 DEBUG oslo_concurrency.lockutils [req-806bec21-066c-4a89-a716-b1edda615e6a req-02f6d77c-e9e4-4751-a764-3472d529b8e5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:04 compute-0 nova_compute[239545]: 2026-02-02 15:52:04.686 239549 DEBUG nova.compute.manager [req-806bec21-066c-4a89-a716-b1edda615e6a req-02f6d77c-e9e4-4751-a764-3472d529b8e5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Processing event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.579 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.580 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047525.5789397, c79d4e81-b8f8-4ca4-8355-90da048bd198 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.581 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] VM Started (Lifecycle Event)
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.585 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.589 239549 INFO nova.virt.libvirt.driver [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Instance spawned successfully.
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.589 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.612 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.619 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.622 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.623 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.623 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.624 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.624 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.624 239549 DEBUG nova.virt.libvirt.driver [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.643 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.643 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047525.5803072, c79d4e81-b8f8-4ca4-8355-90da048bd198 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.643 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] VM Paused (Lifecycle Event)
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.701 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.705 239549 DEBUG nova.virt.driver [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] Emitting event <LifecycleEvent: 1770047525.5827649, c79d4e81-b8f8-4ca4-8355-90da048bd198 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.706 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] VM Resumed (Lifecycle Event)
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.732 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.736 239549 DEBUG nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.753 239549 INFO nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Took 7.72 seconds to spawn the instance on the hypervisor.
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.754 239549 DEBUG nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.799 239549 INFO nova.compute.manager [None req-47922235-cf07-462d-a109-fb84d0fe9c58 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.841 239549 INFO nova.compute.manager [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Took 9.90 seconds to build instance.
Feb 02 15:52:05 compute-0 nova_compute[239545]: 2026-02-02 15:52:05.857 239549 DEBUG oslo_concurrency.lockutils [None req-05f483d1-8357-40d7-8621-e21f2b06569a 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:05 compute-0 ceph-mon[75334]: pgmap v1841: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 12 op/s
Feb 02 15:52:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 12 KiB/s wr, 19 op/s
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.755 239549 DEBUG nova.compute.manager [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.755 239549 DEBUG oslo_concurrency.lockutils [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.755 239549 DEBUG oslo_concurrency.lockutils [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.756 239549 DEBUG oslo_concurrency.lockutils [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.756 239549 DEBUG nova.compute.manager [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] No waiting events found dispatching network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.756 239549 WARNING nova.compute.manager [req-3ecb1401-6950-486c-8b69-6277c2356314 req-e93ba548-287f-4173-bbaf-abd1074d054a d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received unexpected event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 for instance with vm_state active and task_state None.
Feb 02 15:52:06 compute-0 nova_compute[239545]: 2026-02-02 15:52:06.921 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:07 compute-0 ceph-mon[75334]: pgmap v1842: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 12 KiB/s wr, 19 op/s
Feb 02 15:52:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 12 op/s
Feb 02 15:52:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:09 compute-0 nova_compute[239545]: 2026-02-02 15:52:09.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:09 compute-0 ceph-mon[75334]: pgmap v1843: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 12 op/s
Feb 02 15:52:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 769 KiB/s rd, 12 KiB/s wr, 36 op/s
Feb 02 15:52:10 compute-0 sudo[275276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:52:10 compute-0 sudo[275276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:10 compute-0 sudo[275276]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:10 compute-0 sudo[275301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:52:10 compute-0 sudo[275301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:52:10 compute-0 sudo[275301]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.753 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.753 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.754 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:52:10 compute-0 nova_compute[239545]: 2026-02-02 15:52:10.754 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:52:10 compute-0 sudo[275357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:52:10 compute-0 sudo[275357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:10 compute-0 sudo[275357]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:10 compute-0 sudo[275382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:52:10 compute-0 sudo[275382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:52:10 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.125721481 +0000 UTC m=+0.038649363 container create 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:52:11 compute-0 systemd[1]: Started libpod-conmon-50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f.scope.
Feb 02 15:52:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.110034915 +0000 UTC m=+0.022962827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.210490093 +0000 UTC m=+0.123417985 container init 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.217518016 +0000 UTC m=+0.130445928 container start 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.221938265 +0000 UTC m=+0.134866197 container attach 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:52:11 compute-0 heuristic_napier[275436]: 167 167
Feb 02 15:52:11 compute-0 systemd[1]: libpod-50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f.scope: Deactivated successfully.
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.22621176 +0000 UTC m=+0.139139652 container died 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c76b041f4d212b3bf95762af22ef213af9927b60b3e57998e60b2b7d5a645e-merged.mount: Deactivated successfully.
Feb 02 15:52:11 compute-0 podman[275419]: 2026-02-02 15:52:11.265052539 +0000 UTC m=+0.177980421 container remove 50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:52:11 compute-0 systemd[1]: libpod-conmon-50b6fc0cb00f434992165e6bd112dc76d099ecb264402d705a657aead936a55f.scope: Deactivated successfully.
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.419961919 +0000 UTC m=+0.048005505 container create 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 02 15:52:11 compute-0 systemd[1]: Started libpod-conmon-8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b.scope.
Feb 02 15:52:11 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.399185536 +0000 UTC m=+0.027229182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.507846697 +0000 UTC m=+0.135890333 container init 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.516602983 +0000 UTC m=+0.144646579 container start 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.520315275 +0000 UTC m=+0.148358891 container attach 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.851 239549 DEBUG nova.compute.manager [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-changed-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.852 239549 DEBUG nova.compute.manager [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Refreshing instance network info cache due to event network-changed-d01f5485-2544-4646-8ca6-308513fda325. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.853 239549 DEBUG oslo_concurrency.lockutils [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.853 239549 DEBUG oslo_concurrency.lockutils [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquired lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.853 239549 DEBUG nova.network.neutron [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Refreshing network info cache for port d01f5485-2544-4646-8ca6-308513fda325 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 02 15:52:11 compute-0 determined_euler[275477]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:52:11 compute-0 determined_euler[275477]: --> All data devices are unavailable
Feb 02 15:52:11 compute-0 nova_compute[239545]: 2026-02-02 15:52:11.923 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:11 compute-0 systemd[1]: libpod-8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b.scope: Deactivated successfully.
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.940485359 +0000 UTC m=+0.568528955 container died 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-81698d00155edccb8455466dc4777ca21ffb29ec614dddfbd47715eafff5e22c-merged.mount: Deactivated successfully.
Feb 02 15:52:11 compute-0 podman[275459]: 2026-02-02 15:52:11.978071996 +0000 UTC m=+0.606115592 container remove 8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:52:11 compute-0 ceph-mon[75334]: pgmap v1844: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 769 KiB/s rd, 12 KiB/s wr, 36 op/s
Feb 02 15:52:12 compute-0 systemd[1]: libpod-conmon-8d92f6e3cb057e6d9af4cd4dde863f9a43966fa6f7547f8514d3b650aad3e07b.scope: Deactivated successfully.
Feb 02 15:52:12 compute-0 sudo[275382]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:12 compute-0 sudo[275510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:52:12 compute-0 sudo[275510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:12 compute-0 sudo[275510]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:12 compute-0 sudo[275535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:52:12 compute-0 sudo[275535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.448304995 +0000 UTC m=+0.035656411 container create b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:12 compute-0 systemd[1]: Started libpod-conmon-b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52.scope.
Feb 02 15:52:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.517772848 +0000 UTC m=+0.105124294 container init b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.522580548 +0000 UTC m=+0.109931964 container start b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 02 15:52:12 compute-0 flamboyant_booth[275589]: 167 167
Feb 02 15:52:12 compute-0 systemd[1]: libpod-b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52.scope: Deactivated successfully.
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.432852054 +0000 UTC m=+0.020203500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.530130103 +0000 UTC m=+0.117481539 container attach b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.530432721 +0000 UTC m=+0.117784127 container died b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-effb1b37ace7021b55a3414ee0b2bb9cf06dd7d621849676e7124a9f6eed88d5-merged.mount: Deactivated successfully.
Feb 02 15:52:12 compute-0 podman[275572]: 2026-02-02 15:52:12.607286696 +0000 UTC m=+0.194638112 container remove b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:52:12 compute-0 systemd[1]: libpod-conmon-b3d9ef966eaf861357d9b6b402cc1a73a3a06b7f1d6a22a841e472334539bf52.scope: Deactivated successfully.
Feb 02 15:52:12 compute-0 podman[275615]: 2026-02-02 15:52:12.785631566 +0000 UTC m=+0.056027253 container create 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:12 compute-0 systemd[1]: Started libpod-conmon-09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8.scope.
Feb 02 15:52:12 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6a7b2e0536fd1062dea23759894cbaeb35910d8af3b6c9b4063e8d61d0da76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6a7b2e0536fd1062dea23759894cbaeb35910d8af3b6c9b4063e8d61d0da76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6a7b2e0536fd1062dea23759894cbaeb35910d8af3b6c9b4063e8d61d0da76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6a7b2e0536fd1062dea23759894cbaeb35910d8af3b6c9b4063e8d61d0da76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:12 compute-0 podman[275615]: 2026-02-02 15:52:12.763944891 +0000 UTC m=+0.034340648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:12 compute-0 podman[275615]: 2026-02-02 15:52:12.875695667 +0000 UTC m=+0.146091364 container init 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:52:12 compute-0 podman[275615]: 2026-02-02 15:52:12.881743036 +0000 UTC m=+0.152138703 container start 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:12 compute-0 nova_compute[239545]: 2026-02-02 15:52:12.885 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:52:12 compute-0 podman[275615]: 2026-02-02 15:52:12.888483343 +0000 UTC m=+0.158879030 container attach 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:52:12 compute-0 nova_compute[239545]: 2026-02-02 15:52:12.907 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:52:12 compute-0 nova_compute[239545]: 2026-02-02 15:52:12.908 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:52:12 compute-0 nova_compute[239545]: 2026-02-02 15:52:12.908 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:13 compute-0 distracted_noyce[275631]: {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     "0": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "devices": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "/dev/loop3"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             ],
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_name": "ceph_lv0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_size": "21470642176",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "name": "ceph_lv0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "tags": {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.crush_device_class": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.encrypted": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_id": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.vdo": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.with_tpm": "0"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             },
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "vg_name": "ceph_vg0"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         }
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     ],
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     "1": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "devices": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "/dev/loop4"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             ],
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_name": "ceph_lv1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_size": "21470642176",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "name": "ceph_lv1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "tags": {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.crush_device_class": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.encrypted": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_id": "1",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.vdo": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.with_tpm": "0"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             },
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "vg_name": "ceph_vg1"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         }
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     ],
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     "2": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "devices": [
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "/dev/loop5"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             ],
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_name": "ceph_lv2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_size": "21470642176",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "name": "ceph_lv2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "tags": {
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.cluster_name": "ceph",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.crush_device_class": "",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.encrypted": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.objectstore": "bluestore",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osd_id": "2",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.vdo": "0",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:                 "ceph.with_tpm": "0"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             },
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "type": "block",
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:             "vg_name": "ceph_vg2"
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:         }
Feb 02 15:52:13 compute-0 distracted_noyce[275631]:     ]
Feb 02 15:52:13 compute-0 distracted_noyce[275631]: }
Feb 02 15:52:13 compute-0 systemd[1]: libpod-09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8.scope: Deactivated successfully.
Feb 02 15:52:13 compute-0 podman[275615]: 2026-02-02 15:52:13.165344512 +0000 UTC m=+0.435740179 container died 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e6a7b2e0536fd1062dea23759894cbaeb35910d8af3b6c9b4063e8d61d0da76-merged.mount: Deactivated successfully.
Feb 02 15:52:13 compute-0 podman[275615]: 2026-02-02 15:52:13.267446351 +0000 UTC m=+0.537842018 container remove 09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:52:13 compute-0 systemd[1]: libpod-conmon-09f52244c16af4c0df9131be5c29d58cdd1e1d8b8388fde0510a0b48fafa48b8.scope: Deactivated successfully.
Feb 02 15:52:13 compute-0 sudo[275535]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:13 compute-0 sudo[275655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:52:13 compute-0 sudo[275655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:13 compute-0 sudo[275655]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:13 compute-0 sudo[275680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:52:13 compute-0 sudo[275680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.738244133 +0000 UTC m=+0.052265579 container create 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 02 15:52:13 compute-0 systemd[1]: Started libpod-conmon-9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323.scope.
Feb 02 15:52:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:13 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.7154028 +0000 UTC m=+0.029424276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.826008919 +0000 UTC m=+0.140030385 container init 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.832138499 +0000 UTC m=+0.146159945 container start 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.83662635 +0000 UTC m=+0.150647816 container attach 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:52:13 compute-0 bold_bose[275734]: 167 167
Feb 02 15:52:13 compute-0 systemd[1]: libpod-9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323.scope: Deactivated successfully.
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.838982388 +0000 UTC m=+0.153003834 container died 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:52:13 compute-0 nova_compute[239545]: 2026-02-02 15:52:13.876 239549 DEBUG nova.network.neutron [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updated VIF entry in instance network info cache for port d01f5485-2544-4646-8ca6-308513fda325. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 02 15:52:13 compute-0 nova_compute[239545]: 2026-02-02 15:52:13.878 239549 DEBUG nova.network.neutron [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updating instance_info_cache with network_info: [{"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d949638da17cd305b2dc68ec0b7b2f01df27d46ab8be4fa89427fae11d64a7a-merged.mount: Deactivated successfully.
Feb 02 15:52:13 compute-0 podman[275718]: 2026-02-02 15:52:13.908827141 +0000 UTC m=+0.222848587 container remove 9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:52:13 compute-0 systemd[1]: libpod-conmon-9bde38d8039a81e5963d5e8fa1a1bc9cf3d409ad7b6940c2fcc2765ef3b6d323.scope: Deactivated successfully.
Feb 02 15:52:13 compute-0 nova_compute[239545]: 2026-02-02 15:52:13.918 239549 DEBUG oslo_concurrency.lockutils [req-cf911241-b87b-4c7c-97e0-a6c934492562 req-17b3be48-8ec7-42de-8492-399a83bd940c d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Releasing lock "refresh_cache-c79d4e81-b8f8-4ca4-8355-90da048bd198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:52:14 compute-0 ceph-mon[75334]: pgmap v1845: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.0462313 +0000 UTC m=+0.045866692 container create 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:52:14 compute-0 systemd[1]: Started libpod-conmon-9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21.scope.
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.023292775 +0000 UTC m=+0.022928197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:52:14 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/185713e426224c6095cc97e3b0b9679e6d771d1f8d8ff2f0c2c5965daee1183a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/185713e426224c6095cc97e3b0b9679e6d771d1f8d8ff2f0c2c5965daee1183a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/185713e426224c6095cc97e3b0b9679e6d771d1f8d8ff2f0c2c5965daee1183a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/185713e426224c6095cc97e3b0b9679e6d771d1f8d8ff2f0c2c5965daee1183a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.137982474 +0000 UTC m=+0.137617896 container init 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.144207577 +0000 UTC m=+0.143842969 container start 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.148749849 +0000 UTC m=+0.148385261 container attach 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 02 15:52:14 compute-0 nova_compute[239545]: 2026-02-02 15:52:14.578 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:14 compute-0 lvm[275856]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:52:14 compute-0 lvm[275854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:52:14 compute-0 lvm[275856]: VG ceph_vg1 finished
Feb 02 15:52:14 compute-0 lvm[275854]: VG ceph_vg0 finished
Feb 02 15:52:14 compute-0 lvm[275858]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:52:14 compute-0 lvm[275858]: VG ceph_vg2 finished
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:14 compute-0 peaceful_wilbur[275777]: {}
Feb 02 15:52:14 compute-0 systemd[1]: libpod-9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21.scope: Deactivated successfully.
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.858762823 +0000 UTC m=+0.858398215 container died 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 02 15:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-185713e426224c6095cc97e3b0b9679e6d771d1f8d8ff2f0c2c5965daee1183a-merged.mount: Deactivated successfully.
Feb 02 15:52:14 compute-0 podman[275760]: 2026-02-02 15:52:14.969414762 +0000 UTC m=+0.969050154 container remove 9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_wilbur, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:52:14 compute-0 systemd[1]: libpod-conmon-9cc115510f9315d6cd6be512c44dbd8997f6f0ee227ca9aec2580897b1833f21.scope: Deactivated successfully.
Feb 02 15:52:15 compute-0 sudo[275680]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:52:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:52:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:15 compute-0 sudo[275871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:52:15 compute-0 sudo[275871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:52:15 compute-0 sudo[275871]: pam_unix(sudo:session): session closed for user root
Feb 02 15:52:16 compute-0 ceph-mon[75334]: pgmap v1846: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 02 15:52:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:52:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Feb 02 15:52:16 compute-0 nova_compute[239545]: 2026-02-02 15:52:16.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:16 compute-0 nova_compute[239545]: 2026-02-02 15:52:16.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:16 compute-0 nova_compute[239545]: 2026-02-02 15:52:16.926 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:17 compute-0 ovn_controller[144995]: 2026-02-02T15:52:17Z|00071|pinctrl(ovn_pinctrl0)|WARN|Dropped 1 log messages in last 361 seconds (most recently, 361 seconds ago) due to excessive rate
Feb 02 15:52:17 compute-0 ovn_controller[144995]: 2026-02-02T15:52:17Z|00072|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.4
Feb 02 15:52:17 compute-0 ovn_controller[144995]: 2026-02-02T15:52:17Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:82:bb:68 10.100.0.4
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.566 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.566 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.567 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:52:17 compute-0 nova_compute[239545]: 2026-02-02 15:52:17.567 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:18 compute-0 ceph-mon[75334]: pgmap v1847: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Feb 02 15:52:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:52:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388533385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.061 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.148 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.149 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.155 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.155 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.155 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:52:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.298 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.299 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3697MB free_disk=59.94235258549452GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.299 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.300 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.381 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.382 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance c79d4e81-b8f8-4ca4-8355-90da048bd198 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.382 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.382 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.394 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.411 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.411 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.428 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.446 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:52:18 compute-0 nova_compute[239545]: 2026-02-02 15:52:18.524 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1388533385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:52:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841238456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.111 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.119 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.135 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.161 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.162 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:19 compute-0 nova_compute[239545]: 2026-02-02 15:52:19.580 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:20 compute-0 ceph-mon[75334]: pgmap v1848: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 02 15:52:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2841238456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:20 compute-0 nova_compute[239545]: 2026-02-02 15:52:20.158 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 67 op/s
Feb 02 15:52:20 compute-0 nova_compute[239545]: 2026-02-02 15:52:20.182 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:20 compute-0 nova_compute[239545]: 2026-02-02 15:52:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:21 compute-0 ceph-mon[75334]: pgmap v1849: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 67 op/s
Feb 02 15:52:21 compute-0 nova_compute[239545]: 2026-02-02 15:52:21.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:52:21 compute-0 nova_compute[239545]: 2026-02-02 15:52:21.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:52:21 compute-0 ovn_controller[144995]: 2026-02-02T15:52:21Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.4
Feb 02 15:52:21 compute-0 ovn_controller[144995]: 2026-02-02T15:52:21Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:82:bb:68 10.100.0.4
Feb 02 15:52:21 compute-0 nova_compute[239545]: 2026-02-02 15:52:21.928 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Feb 02 15:52:22 compute-0 ovn_controller[144995]: 2026-02-02T15:52:22Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:bb:68 10.100.0.4
Feb 02 15:52:22 compute-0 ovn_controller[144995]: 2026-02-02T15:52:22Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:bb:68 10.100.0.4
Feb 02 15:52:23 compute-0 ceph-mon[75334]: pgmap v1850: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Feb 02 15:52:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 47 op/s
Feb 02 15:52:24 compute-0 nova_compute[239545]: 2026-02-02 15:52:24.582 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:25 compute-0 ceph-mon[75334]: pgmap v1851: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 47 op/s
Feb 02 15:52:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:26 compute-0 nova_compute[239545]: 2026-02-02 15:52:26.930 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:27 compute-0 ceph-mon[75334]: pgmap v1852: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:52:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/339784961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:52:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/339784961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/339784961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/339784961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:28 compute-0 podman[275942]: 2026-02-02 15:52:28.325503259 +0000 UTC m=+0.061819706 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:52:28 compute-0 podman[275941]: 2026-02-02 15:52:28.349370849 +0000 UTC m=+0.088556327 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 02 15:52:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:29 compute-0 ceph-mon[75334]: pgmap v1853: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:29 compute-0 nova_compute[239545]: 2026-02-02 15:52:29.585 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:31 compute-0 ceph-mon[75334]: pgmap v1854: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 02 15:52:31 compute-0 nova_compute[239545]: 2026-02-02 15:52:31.972 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 47 op/s
Feb 02 15:52:33 compute-0 ceph-mon[75334]: pgmap v1855: 305 pgs: 305 active+clean; 369 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 47 op/s
Feb 02 15:52:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 373 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 445 KiB/s wr, 3 op/s
Feb 02 15:52:34 compute-0 nova_compute[239545]: 2026-02-02 15:52:34.586 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:35 compute-0 ceph-mon[75334]: pgmap v1856: 305 pgs: 305 active+clean; 373 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 445 KiB/s wr, 3 op/s
Feb 02 15:52:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 794 KiB/s wr, 7 op/s
Feb 02 15:52:36 compute-0 nova_compute[239545]: 2026-02-02 15:52:36.975 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:37 compute-0 ceph-mon[75334]: pgmap v1857: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 794 KiB/s wr, 7 op/s
Feb 02 15:52:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:39 compute-0 ceph-mon[75334]: pgmap v1858: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:39 compute-0 nova_compute[239545]: 2026-02-02 15:52:39.588 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:40 compute-0 ovn_controller[144995]: 2026-02-02T15:52:40Z|00283|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Feb 02 15:52:41 compute-0 ceph-mon[75334]: pgmap v1859: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:42 compute-0 nova_compute[239545]: 2026-02-02 15:52:42.037 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:52:42
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'images', 'backups']
Feb 02 15:52:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:52:43 compute-0 ceph-mon[75334]: pgmap v1860: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 5 op/s
Feb 02 15:52:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.135 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.136 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.136 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.136 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.136 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.138 239549 INFO nova.compute.manager [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Terminating instance
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.139 239549 DEBUG nova.compute.manager [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Feb 02 15:52:44 compute-0 kernel: tapd01f5485-25 (unregistering): left promiscuous mode
Feb 02 15:52:44 compute-0 NetworkManager[49171]: <info>  [1770047564.1937] device (tapd01f5485-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 15:52:44 compute-0 ovn_controller[144995]: 2026-02-02T15:52:44Z|00284|binding|INFO|Releasing lport d01f5485-2544-4646-8ca6-308513fda325 from this chassis (sb_readonly=0)
Feb 02 15:52:44 compute-0 ovn_controller[144995]: 2026-02-02T15:52:44Z|00285|binding|INFO|Setting lport d01f5485-2544-4646-8ca6-308513fda325 down in Southbound
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.200 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 ovn_controller[144995]: 2026-02-02T15:52:44Z|00286|binding|INFO|Removing iface tapd01f5485-25 ovn-installed in OVS
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.203 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.206 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.216 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:bb:68 10.100.0.4'], port_security=['fa:16:3e:82:bb:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c79d4e81-b8f8-4ca4-8355-90da048bd198', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-571a8d26-1b08-4233-a158-71a28cbbf88c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e1abae6c1404ce2b24265e7136ffe6a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '189fd68e-8be4-418b-963a-7de1d59bfc2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7394ccd-eb0f-47a9-85af-ffa4a04fcde8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=d01f5485-2544-4646-8ca6-308513fda325) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.217 154982 INFO neutron.agent.ovn.metadata.agent [-] Port d01f5485-2544-4646-8ca6-308513fda325 in datapath 571a8d26-1b08-4233-a158-71a28cbbf88c unbound from our chassis
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.218 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 571a8d26-1b08-4233-a158-71a28cbbf88c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.219 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fea5d1-6a54-435a-bbf1-040b6a94d4f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.220 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c namespace which is not needed anymore
Feb 02 15:52:44 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb 02 15:52:44 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.118s CPU time.
Feb 02 15:52:44 compute-0 systemd-machined[207609]: Machine qemu-29-instance-0000001d terminated.
Feb 02 15:52:44 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [NOTICE]   (275259) : haproxy version is 2.8.14-c23fe91
Feb 02 15:52:44 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [NOTICE]   (275259) : path to executable is /usr/sbin/haproxy
Feb 02 15:52:44 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [WARNING]  (275259) : Exiting Master process...
Feb 02 15:52:44 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [ALERT]    (275259) : Current worker (275261) exited with code 143 (Terminated)
Feb 02 15:52:44 compute-0 neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c[275255]: [WARNING]  (275259) : All workers exited. Exiting... (0)
Feb 02 15:52:44 compute-0 systemd[1]: libpod-ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c.scope: Deactivated successfully.
Feb 02 15:52:44 compute-0 podman[276011]: 2026-02-02 15:52:44.347128866 +0000 UTC m=+0.049488481 container died ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.406 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.418 239549 INFO nova.virt.libvirt.driver [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Instance destroyed successfully.
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.419 239549 DEBUG nova.objects.instance [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lazy-loading 'resources' on Instance uuid c79d4e81-b8f8-4ca4-8355-90da048bd198 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c-userdata-shm.mount: Deactivated successfully.
Feb 02 15:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-090ec97dddd8d502989fe0bcb473d174866989c5e1353eea1ed87b2ac17a2eb8-merged.mount: Deactivated successfully.
Feb 02 15:52:44 compute-0 podman[276011]: 2026-02-02 15:52:44.426948025 +0000 UTC m=+0.129307620 container cleanup ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:52:44 compute-0 systemd[1]: libpod-conmon-ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c.scope: Deactivated successfully.
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.433 239549 DEBUG nova.virt.libvirt.vif [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:51:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-95220137',display_name='tempest-TestEncryptedCinderVolumes-server-95220137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-95220137',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGghUGTIcKcbmFyDjDExJq0q+VsuYW4BqumxGHKLx3E1e/6oKedlb5/fmggown6dVAhqPLOwmstclEUWmmD7KyDyLHDlHuBYQ6150Bpk3MrMabPI6fo5dl75qL/VQaUJ/g==',key_name='tempest-TestEncryptedCinderVolumes-841766027',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:52:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e1abae6c1404ce2b24265e7136ffe6a',ramdisk_id='',reservation_id='r-8455trtl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-987785960',owner_user_name='tempest-TestEncryptedCinderVolumes-987785960-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:52:05Z,user_data=None,user_id='16b55bfc98574e0096db4f19bcdcbb2e',uuid=c79d4e81-b8f8-4ca4-8355-90da048bd198,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.434 239549 DEBUG nova.network.os_vif_util [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converting VIF {"id": "d01f5485-2544-4646-8ca6-308513fda325", "address": "fa:16:3e:82:bb:68", "network": {"id": "571a8d26-1b08-4233-a158-71a28cbbf88c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-205550940-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e1abae6c1404ce2b24265e7136ffe6a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01f5485-25", "ovs_interfaceid": "d01f5485-2544-4646-8ca6-308513fda325", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.435 239549 DEBUG nova.network.os_vif_util [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.435 239549 DEBUG os_vif [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.437 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.437 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd01f5485-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.438 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.439 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.441 239549 INFO os_vif [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:68,bridge_name='br-int',has_traffic_filtering=True,id=d01f5485-2544-4646-8ca6-308513fda325,network=Network(571a8d26-1b08-4233-a158-71a28cbbf88c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01f5485-25')
Feb 02 15:52:44 compute-0 podman[276051]: 2026-02-02 15:52:44.485068209 +0000 UTC m=+0.042210292 container remove ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.491 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[56d0a467-226c-4316-a900-c9908a25cc7c]: (4, ('Mon Feb  2 03:52:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c)\nac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c\nMon Feb  2 03:52:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c (ac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c)\nac5c0631a162517e75efa747792424208c71a9171ac5ea79ae43939ee15ef66c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.494 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[bce66fa1-e67a-4177-bc04-3e5e9fe02e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.495 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap571a8d26-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.496 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 kernel: tap571a8d26-10: left promiscuous mode
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.501 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.504 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb9d8c6-44a6-464a-93d9-3176d2b4c19a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.519 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5f2754-4623-4eed-baa8-12a1de83540a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.521 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[948aa64a-ffda-4526-a037-2ae6fa4a887f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.535 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[63305709-adec-4639-a42f-2fce07349b8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500862, 'reachable_time': 29149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276084, 'error': None, 'target': 'ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d571a8d26\x2d1b08\x2d4233\x2da158\x2d71a28cbbf88c.mount: Deactivated successfully.
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.537 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-571a8d26-1b08-4233-a158-71a28cbbf88c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 15:52:44 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:44.537 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[de274767-0f73-4689-a6e3-29e11f08a5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.583 239549 INFO nova.virt.libvirt.driver [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Deleting instance files /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198_del
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.584 239549 INFO nova.virt.libvirt.driver [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Deletion of /var/lib/nova/instances/c79d4e81-b8f8-4ca4-8355-90da048bd198_del complete
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.589 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.647 239549 INFO nova.compute.manager [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Took 0.51 seconds to destroy the instance on the hypervisor.
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.648 239549 DEBUG oslo.service.loopingcall [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.648 239549 DEBUG nova.compute.manager [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.649 239549 DEBUG nova.network.neutron [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:52:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.951 239549 DEBUG nova.compute.manager [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-unplugged-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.952 239549 DEBUG oslo_concurrency.lockutils [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.952 239549 DEBUG oslo_concurrency.lockutils [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.953 239549 DEBUG oslo_concurrency.lockutils [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.953 239549 DEBUG nova.compute.manager [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] No waiting events found dispatching network-vif-unplugged-d01f5485-2544-4646-8ca6-308513fda325 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:52:44 compute-0 nova_compute[239545]: 2026-02-02 15:52:44.953 239549 DEBUG nova.compute.manager [req-3aa06c7d-615b-4401-a087-17f9d75f5a7d req-c67e7819-e707-4106-b465-fc6952abfdaf d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-unplugged-d01f5485-2544-4646-8ca6-308513fda325 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:52:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:52:45 compute-0 ceph-mon[75334]: pgmap v1861: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Feb 02 15:52:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 349 KiB/s wr, 10 op/s
Feb 02 15:52:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:46.661 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 15:52:46 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:46.662 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.669 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.685 239549 DEBUG nova.network.neutron [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.709 239549 INFO nova.compute.manager [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Took 2.06 seconds to deallocate network for instance.
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.915 239549 INFO nova.compute.manager [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Took 0.21 seconds to detach 1 volumes for instance.
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.959 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:46 compute-0 nova_compute[239545]: 2026-02-02 15:52:46.960 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.036 239549 DEBUG oslo_concurrency.processutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.064 239549 DEBUG nova.compute.manager [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.065 239549 DEBUG oslo_concurrency.lockutils [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.066 239549 DEBUG oslo_concurrency.lockutils [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.066 239549 DEBUG oslo_concurrency.lockutils [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.066 239549 DEBUG nova.compute.manager [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] No waiting events found dispatching network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.067 239549 WARNING nova.compute.manager [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received unexpected event network-vif-plugged-d01f5485-2544-4646-8ca6-308513fda325 for instance with vm_state deleted and task_state None.
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.067 239549 DEBUG nova.compute.manager [req-4d3a0b03-f931-43a3-994f-c3f049bcb1b3 req-96acb35f-4a2d-4fe8-b7a0-30ba3c457b20 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Received event network-vif-deleted-d01f5485-2544-4646-8ca6-308513fda325 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 15:52:47 compute-0 ceph-mon[75334]: pgmap v1862: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 349 KiB/s wr, 10 op/s
Feb 02 15:52:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:52:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540310692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.594 239549 DEBUG oslo_concurrency.processutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.600 239549 DEBUG nova.compute.provider_tree [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.615 239549 DEBUG nova.scheduler.client.report [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.634 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.667 239549 INFO nova.scheduler.client.report [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Deleted allocations for instance c79d4e81-b8f8-4ca4-8355-90da048bd198
Feb 02 15:52:47 compute-0 nova_compute[239545]: 2026-02-02 15:52:47.734 239549 DEBUG oslo_concurrency.lockutils [None req-7746afc2-1b29-488a-8f50-1e784b50aee2 16b55bfc98574e0096db4f19bcdcbb2e 6e1abae6c1404ce2b24265e7136ffe6a - - default default] Lock "c79d4e81-b8f8-4ca4-8355-90da048bd198" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 255 B/s wr, 6 op/s
Feb 02 15:52:48 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/540310692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:52:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:52:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170140806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:52:49 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170140806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:49 compute-0 nova_compute[239545]: 2026-02-02 15:52:49.439 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:49 compute-0 ceph-mon[75334]: pgmap v1863: 305 pgs: 305 active+clean; 377 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 255 B/s wr, 6 op/s
Feb 02 15:52:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3170140806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:49 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3170140806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:49 compute-0 nova_compute[239545]: 2026-02-02 15:52:49.592 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 361 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 938 B/s wr, 23 op/s
Feb 02 15:52:50 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:50.665 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 15:52:51 compute-0 ceph-mon[75334]: pgmap v1864: 305 pgs: 305 active+clean; 361 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 938 B/s wr, 23 op/s
Feb 02 15:52:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:52:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300604420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:52:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300604420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 357 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 KiB/s wr, 37 op/s
Feb 02 15:52:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3300604420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:52:52 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3300604420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:52:53 compute-0 ceph-mon[75334]: pgmap v1865: 305 pgs: 305 active+clean; 357 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 KiB/s wr, 37 op/s
Feb 02 15:52:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 353 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 KiB/s wr, 37 op/s
Feb 02 15:52:54 compute-0 nova_compute[239545]: 2026-02-02 15:52:54.441 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:54 compute-0 nova_compute[239545]: 2026-02-02 15:52:54.593 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:52:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:52:55 compute-0 ceph-mon[75334]: pgmap v1866: 305 pgs: 305 active+clean; 353 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 KiB/s wr, 37 op/s
Feb 02 15:52:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Feb 02 15:52:56 compute-0 ovn_controller[144995]: 2026-02-02T15:52:56Z|00287|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:52:56 compute-0 nova_compute[239545]: 2026-02-02 15:52:56.791 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:57 compute-0 ceph-mon[75334]: pgmap v1867: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Feb 02 15:52:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Feb 02 15:52:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:52:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:59.262 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:52:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:59.263 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:52:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:52:59.263 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:52:59 compute-0 podman[276110]: 2026-02-02 15:52:59.312279573 +0000 UTC m=+0.050117678 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:52:59 compute-0 podman[276109]: 2026-02-02 15:52:59.372468477 +0000 UTC m=+0.106986600 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 02 15:52:59 compute-0 nova_compute[239545]: 2026-02-02 15:52:59.417 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770047564.4154806, c79d4e81-b8f8-4ca4-8355-90da048bd198 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 15:52:59 compute-0 nova_compute[239545]: 2026-02-02 15:52:59.418 239549 INFO nova.compute.manager [-] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] VM Stopped (Lifecycle Event)
Feb 02 15:52:59 compute-0 nova_compute[239545]: 2026-02-02 15:52:59.437 239549 DEBUG nova.compute.manager [None req-864f1c18-262a-4df0-b4fe-b025ec0f4839 - - - - - -] [instance: c79d4e81-b8f8-4ca4-8355-90da048bd198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 15:52:59 compute-0 nova_compute[239545]: 2026-02-02 15:52:59.444 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:52:59 compute-0 ceph-mon[75334]: pgmap v1868: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Feb 02 15:52:59 compute-0 nova_compute[239545]: 2026-02-02 15:52:59.595 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Feb 02 15:53:01 compute-0 ceph-mon[75334]: pgmap v1869: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Feb 02 15:53:01 compute-0 ovn_controller[144995]: 2026-02-02T15:53:01Z|00288|binding|INFO|Releasing lport a43331b2-e1ad-4aa9-beac-e80c59fa7f31 from this chassis (sb_readonly=0)
Feb 02 15:53:01 compute-0 nova_compute[239545]: 2026-02-02 15:53:01.791 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 597 B/s wr, 22 op/s
Feb 02 15:53:03 compute-0 ceph-mon[75334]: pgmap v1870: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 597 B/s wr, 22 op/s
Feb 02 15:53:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Feb 02 15:53:04 compute-0 nova_compute[239545]: 2026-02-02 15:53:04.446 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:04 compute-0 nova_compute[239545]: 2026-02-02 15:53:04.597 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:05 compute-0 ceph-mon[75334]: pgmap v1871: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Feb 02 15:53:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Feb 02 15:53:07 compute-0 ceph-mon[75334]: pgmap v1872: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Feb 02 15:53:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:09 compute-0 nova_compute[239545]: 2026-02-02 15:53:09.448 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:09 compute-0 nova_compute[239545]: 2026-02-02 15:53:09.599 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:09 compute-0 ceph-mon[75334]: pgmap v1873: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:10 compute-0 nova_compute[239545]: 2026-02-02 15:53:10.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:11 compute-0 nova_compute[239545]: 2026-02-02 15:53:11.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:11 compute-0 nova_compute[239545]: 2026-02-02 15:53:11.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:53:11 compute-0 nova_compute[239545]: 2026-02-02 15:53:11.566 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 15:53:11 compute-0 ceph-mon[75334]: pgmap v1874: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:13 compute-0 ceph-mon[75334]: pgmap v1875: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:14 compute-0 nova_compute[239545]: 2026-02-02 15:53:14.451 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:14 compute-0 nova_compute[239545]: 2026-02-02 15:53:14.600 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:15 compute-0 sudo[276154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:53:15 compute-0 sudo[276154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:15 compute-0 sudo[276154]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:15 compute-0 sudo[276179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:53:15 compute-0 sudo[276179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: pgmap v1876: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:15 compute-0 sudo[276179]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:53:15 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:53:15 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:53:15 compute-0 sudo[276235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:53:15 compute-0 sudo[276235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:15 compute-0 sudo[276235]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:15 compute-0 sudo[276260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:53:15 compute-0 sudo[276260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.228958597 +0000 UTC m=+0.042030308 container create 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:53:16 compute-0 systemd[1]: Started libpod-conmon-8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5.scope.
Feb 02 15:53:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.20964232 +0000 UTC m=+0.022714061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.312835006 +0000 UTC m=+0.125906707 container init 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.321291125 +0000 UTC m=+0.134362836 container start 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.324945884 +0000 UTC m=+0.138017675 container attach 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:53:16 compute-0 mystifying_hoover[276312]: 167 167
Feb 02 15:53:16 compute-0 systemd[1]: libpod-8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5.scope: Deactivated successfully.
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.328273507 +0000 UTC m=+0.141345228 container died 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2069a32b9ec2f09d2f46af4c7b28fa802cace09435452a4326192de62eff70fa-merged.mount: Deactivated successfully.
Feb 02 15:53:16 compute-0 podman[276295]: 2026-02-02 15:53:16.368658543 +0000 UTC m=+0.181730264 container remove 8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 02 15:53:16 compute-0 systemd[1]: libpod-conmon-8b8f3666671b339206e86ee7f3d996e6a112a482c07429004df0c4a2d46d30b5.scope: Deactivated successfully.
Feb 02 15:53:16 compute-0 podman[276335]: 2026-02-02 15:53:16.506025981 +0000 UTC m=+0.052572488 container create dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:53:16 compute-0 systemd[1]: Started libpod-conmon-dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212.scope.
Feb 02 15:53:16 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:16 compute-0 podman[276335]: 2026-02-02 15:53:16.480197454 +0000 UTC m=+0.026744051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:16 compute-0 podman[276335]: 2026-02-02 15:53:16.587089601 +0000 UTC m=+0.133636128 container init dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 15:53:16 compute-0 podman[276335]: 2026-02-02 15:53:16.592409621 +0000 UTC m=+0.138956128 container start dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:53:16 compute-0 podman[276335]: 2026-02-02 15:53:16.596326928 +0000 UTC m=+0.142873445 container attach dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:53:16 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:53:17 compute-0 wonderful_banach[276351]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:53:17 compute-0 wonderful_banach[276351]: --> All data devices are unavailable
Feb 02 15:53:17 compute-0 systemd[1]: libpod-dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212.scope: Deactivated successfully.
Feb 02 15:53:17 compute-0 podman[276335]: 2026-02-02 15:53:17.080410009 +0000 UTC m=+0.626956706 container died dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd7d423fef79c446e78c56f248ae2845ecfe2ed77e5deccb474d83230ead2fd3-merged.mount: Deactivated successfully.
Feb 02 15:53:17 compute-0 podman[276335]: 2026-02-02 15:53:17.135689952 +0000 UTC m=+0.682236469 container remove dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_banach, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 02 15:53:17 compute-0 systemd[1]: libpod-conmon-dcf171c49576bf8ea28170e70c0a7d7a9caca584fb54628c6fdc697db9be8212.scope: Deactivated successfully.
Feb 02 15:53:17 compute-0 sudo[276260]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:17 compute-0 sudo[276384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:53:17 compute-0 sudo[276384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:17 compute-0 sudo[276384]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:17 compute-0 sudo[276409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:53:17 compute-0 sudo[276409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:17 compute-0 nova_compute[239545]: 2026-02-02 15:53:17.561 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.614038881 +0000 UTC m=+0.042522001 container create a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 15:53:17 compute-0 systemd[1]: Started libpod-conmon-a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86.scope.
Feb 02 15:53:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.592288968 +0000 UTC m=+0.020772178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.694916531 +0000 UTC m=+0.123399671 container init a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.699732958 +0000 UTC m=+0.128216148 container start a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:53:17 compute-0 laughing_wozniak[276464]: 167 167
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.703575393 +0000 UTC m=+0.132058533 container attach a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:53:17 compute-0 systemd[1]: libpod-a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86.scope: Deactivated successfully.
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.704553007 +0000 UTC m=+0.133036157 container died a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 02 15:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b9418e5895754b37c4cf0309c28ba91a0d9fe47c8a9f0c66589d7dba32705b2-merged.mount: Deactivated successfully.
Feb 02 15:53:17 compute-0 podman[276447]: 2026-02-02 15:53:17.748570684 +0000 UTC m=+0.177053844 container remove a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:53:17 compute-0 systemd[1]: libpod-conmon-a7116f2dd1972893897a2d342770a7874bbc4aa54794512de3b00cb636a66f86.scope: Deactivated successfully.
Feb 02 15:53:17 compute-0 ceph-mon[75334]: pgmap v1877: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:17 compute-0 podman[276488]: 2026-02-02 15:53:17.921157098 +0000 UTC m=+0.051945932 container create b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:53:17 compute-0 systemd[1]: Started libpod-conmon-b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e.scope.
Feb 02 15:53:17 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc34d9e7c09f0e340006d12aeb972af72d72c0c6e44edf3116e646599bc8514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc34d9e7c09f0e340006d12aeb972af72d72c0c6e44edf3116e646599bc8514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc34d9e7c09f0e340006d12aeb972af72d72c0c6e44edf3116e646599bc8514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc34d9e7c09f0e340006d12aeb972af72d72c0c6e44edf3116e646599bc8514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:17.905769521 +0000 UTC m=+0.036558355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:18.01236573 +0000 UTC m=+0.143154564 container init b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:18.018405619 +0000 UTC m=+0.149194443 container start b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:18.022477929 +0000 UTC m=+0.153266773 container attach b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:53:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:18 compute-0 crazy_easley[276505]: {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     "0": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "devices": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "/dev/loop3"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             ],
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_name": "ceph_lv0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_size": "21470642176",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "name": "ceph_lv0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "tags": {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_name": "ceph",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.crush_device_class": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.encrypted": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.objectstore": "bluestore",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_id": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.vdo": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.with_tpm": "0"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             },
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "vg_name": "ceph_vg0"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         }
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     ],
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     "1": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "devices": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "/dev/loop4"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             ],
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_name": "ceph_lv1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_size": "21470642176",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "name": "ceph_lv1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "tags": {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_name": "ceph",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.crush_device_class": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.encrypted": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.objectstore": "bluestore",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_id": "1",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.vdo": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.with_tpm": "0"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             },
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "vg_name": "ceph_vg1"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         }
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     ],
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     "2": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "devices": [
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "/dev/loop5"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             ],
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_name": "ceph_lv2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_size": "21470642176",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "name": "ceph_lv2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "tags": {
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.cluster_name": "ceph",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.crush_device_class": "",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.encrypted": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.objectstore": "bluestore",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osd_id": "2",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.vdo": "0",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:                 "ceph.with_tpm": "0"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             },
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "type": "block",
Feb 02 15:53:18 compute-0 crazy_easley[276505]:             "vg_name": "ceph_vg2"
Feb 02 15:53:18 compute-0 crazy_easley[276505]:         }
Feb 02 15:53:18 compute-0 crazy_easley[276505]:     ]
Feb 02 15:53:18 compute-0 crazy_easley[276505]: }
Feb 02 15:53:18 compute-0 systemd[1]: libpod-b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e.scope: Deactivated successfully.
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:18.358657377 +0000 UTC m=+0.489446191 container died b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc34d9e7c09f0e340006d12aeb972af72d72c0c6e44edf3116e646599bc8514-merged.mount: Deactivated successfully.
Feb 02 15:53:18 compute-0 podman[276488]: 2026-02-02 15:53:18.403585447 +0000 UTC m=+0.534374271 container remove b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:53:18 compute-0 systemd[1]: libpod-conmon-b67fad7513aff6fc87d7695f1680ccaeb40d80ca8de1a6ef42282c096d46393e.scope: Deactivated successfully.
Feb 02 15:53:18 compute-0 sudo[276409]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:18 compute-0 sudo[276526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:53:18 compute-0 sudo[276526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:18 compute-0 sudo[276526]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:18 compute-0 nova_compute[239545]: 2026-02-02 15:53:18.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:18 compute-0 nova_compute[239545]: 2026-02-02 15:53:18.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:18 compute-0 nova_compute[239545]: 2026-02-02 15:53:18.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:18 compute-0 sudo[276551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:53:18 compute-0 sudo[276551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.878661575 +0000 UTC m=+0.043252610 container create 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:53:18 compute-0 systemd[1]: Started libpod-conmon-04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928.scope.
Feb 02 15:53:18 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.948844463 +0000 UTC m=+0.113435498 container init 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.956971342 +0000 UTC m=+0.121562357 container start 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.864287794 +0000 UTC m=+0.028878819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.960313385 +0000 UTC m=+0.124904410 container attach 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:53:18 compute-0 busy_darwin[276606]: 167 167
Feb 02 15:53:18 compute-0 systemd[1]: libpod-04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928.scope: Deactivated successfully.
Feb 02 15:53:18 compute-0 podman[276589]: 2026-02-02 15:53:18.962218281 +0000 UTC m=+0.126809296 container died 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 02 15:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc23590442e4900c32355d56129a54edfb581be65d82b6608ab4e0c95fe1fc2-merged.mount: Deactivated successfully.
Feb 02 15:53:19 compute-0 podman[276589]: 2026-02-02 15:53:19.015762221 +0000 UTC m=+0.180353236 container remove 04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:53:19 compute-0 systemd[1]: libpod-conmon-04be9904244a76d3807eb1010ba5c391b9391ad41a4d0f4e9fa45fc451f48928.scope: Deactivated successfully.
Feb 02 15:53:19 compute-0 podman[276631]: 2026-02-02 15:53:19.222900701 +0000 UTC m=+0.088944787 container create dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:53:19 compute-0 podman[276631]: 2026-02-02 15:53:19.160788911 +0000 UTC m=+0.026832997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:53:19 compute-0 systemd[1]: Started libpod-conmon-dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51.scope.
Feb 02 15:53:19 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353a51cbdd4c0f9b27a92aaa2f2987a0df0998ee3f51ac62d91ebd4f3b5949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353a51cbdd4c0f9b27a92aaa2f2987a0df0998ee3f51ac62d91ebd4f3b5949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353a51cbdd4c0f9b27a92aaa2f2987a0df0998ee3f51ac62d91ebd4f3b5949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353a51cbdd4c0f9b27a92aaa2f2987a0df0998ee3f51ac62d91ebd4f3b5949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:53:19 compute-0 podman[276631]: 2026-02-02 15:53:19.309175333 +0000 UTC m=+0.175219409 container init dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:53:19 compute-0 podman[276631]: 2026-02-02 15:53:19.31557598 +0000 UTC m=+0.181620076 container start dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 02 15:53:19 compute-0 podman[276631]: 2026-02-02 15:53:19.319128977 +0000 UTC m=+0.185173033 container attach dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.454 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.567 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.568 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.568 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.568 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.568 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:53:19 compute-0 nova_compute[239545]: 2026-02-02 15:53:19.603 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:19 compute-0 ceph-mon[75334]: pgmap v1878: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:19 compute-0 lvm[276744]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:53:19 compute-0 lvm[276744]: VG ceph_vg0 finished
Feb 02 15:53:19 compute-0 lvm[276747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:53:19 compute-0 lvm[276747]: VG ceph_vg1 finished
Feb 02 15:53:19 compute-0 lvm[276749]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:53:19 compute-0 lvm[276749]: VG ceph_vg2 finished
Feb 02 15:53:20 compute-0 condescending_shtern[276647]: {}
Feb 02 15:53:20 compute-0 systemd[1]: libpod-dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51.scope: Deactivated successfully.
Feb 02 15:53:20 compute-0 systemd[1]: libpod-dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51.scope: Consumed 1.165s CPU time.
Feb 02 15:53:20 compute-0 podman[276631]: 2026-02-02 15:53:20.075410749 +0000 UTC m=+0.941454805 container died dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd353a51cbdd4c0f9b27a92aaa2f2987a0df0998ee3f51ac62d91ebd4f3b5949-merged.mount: Deactivated successfully.
Feb 02 15:53:20 compute-0 podman[276631]: 2026-02-02 15:53:20.117314534 +0000 UTC m=+0.983358590 container remove dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:53:20 compute-0 systemd[1]: libpod-conmon-dbab7099eb85a0f4a91dad489c388de73720c939410de339ba160de6ffcd6e51.scope: Deactivated successfully.
Feb 02 15:53:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:53:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517522134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.149 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:53:20 compute-0 sudo[276551]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:53:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:53:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.225 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.225 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.225 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:53:20 compute-0 sudo[276767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:53:20 compute-0 sudo[276767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:53:20 compute-0 sudo[276767]: pam_unix(sudo:session): session closed for user root
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.395 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.397 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3910MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.397 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.397 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.469 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.470 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.470 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:53:20 compute-0 nova_compute[239545]: 2026-02-02 15:53:20.512 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:53:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2517522134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:53:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:20 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:53:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:53:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761853830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:53:21 compute-0 nova_compute[239545]: 2026-02-02 15:53:21.061 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:53:21 compute-0 nova_compute[239545]: 2026-02-02 15:53:21.067 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:53:21 compute-0 nova_compute[239545]: 2026-02-02 15:53:21.082 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:53:21 compute-0 nova_compute[239545]: 2026-02-02 15:53:21.109 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:53:21 compute-0 nova_compute[239545]: 2026-02-02 15:53:21.110 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:53:21 compute-0 ceph-mon[75334]: pgmap v1879: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3761853830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:53:22 compute-0 nova_compute[239545]: 2026-02-02 15:53:22.111 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:22 compute-0 nova_compute[239545]: 2026-02-02 15:53:22.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:53:22 compute-0 nova_compute[239545]: 2026-02-02 15:53:22.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:53:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:23 compute-0 ceph-mon[75334]: pgmap v1880: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Feb 02 15:53:24 compute-0 nova_compute[239545]: 2026-02-02 15:53:24.457 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:24 compute-0 nova_compute[239545]: 2026-02-02 15:53:24.605 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:25 compute-0 ceph-mon[75334]: pgmap v1881: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Feb 02 15:53:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:27 compute-0 ceph-mon[75334]: pgmap v1882: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:53:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420962993' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:53:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:53:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420962993' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:53:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/420962993' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:53:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/420962993' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:53:29 compute-0 nova_compute[239545]: 2026-02-02 15:53:29.460 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:29 compute-0 nova_compute[239545]: 2026-02-02 15:53:29.607 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:29 compute-0 ceph-mon[75334]: pgmap v1883: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:30 compute-0 podman[276815]: 2026-02-02 15:53:30.319403835 +0000 UTC m=+0.057404616 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 02 15:53:30 compute-0 podman[276814]: 2026-02-02 15:53:30.345073814 +0000 UTC m=+0.082775757 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:53:31 compute-0 ceph-mon[75334]: pgmap v1884: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:32 compute-0 ovn_controller[144995]: 2026-02-02T15:53:32Z|00289|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 02 15:53:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:33 compute-0 ceph-mon[75334]: pgmap v1885: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:34 compute-0 nova_compute[239545]: 2026-02-02 15:53:34.462 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:34 compute-0 nova_compute[239545]: 2026-02-02 15:53:34.610 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:35 compute-0 ceph-mon[75334]: pgmap v1886: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 02 15:53:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:53:37 compute-0 ceph-mon[75334]: pgmap v1887: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Feb 02 15:53:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:39 compute-0 nova_compute[239545]: 2026-02-02 15:53:39.465 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:39 compute-0 nova_compute[239545]: 2026-02-02 15:53:39.611 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:39 compute-0 ceph-mon[75334]: pgmap v1888: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:41 compute-0 ceph-mon[75334]: pgmap v1889: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:53:42
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', '.mgr', 'images', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta']
Feb 02 15:53:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:53:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:43 compute-0 ceph-mon[75334]: pgmap v1890: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:44 compute-0 nova_compute[239545]: 2026-02-02 15:53:44.466 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:44 compute-0 nova_compute[239545]: 2026-02-02 15:53:44.612 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:53:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:53:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:53:45 compute-0 ceph-mon[75334]: pgmap v1891: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:47 compute-0 ceph-mon[75334]: pgmap v1892: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:49 compute-0 nova_compute[239545]: 2026-02-02 15:53:49.468 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:49 compute-0 nova_compute[239545]: 2026-02-02 15:53:49.614 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:49 compute-0 ceph-mon[75334]: pgmap v1893: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:51 compute-0 ceph-mon[75334]: pgmap v1894: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:53 compute-0 ceph-mon[75334]: pgmap v1895: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:54 compute-0 nova_compute[239545]: 2026-02-02 15:53:54.470 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:54 compute-0 nova_compute[239545]: 2026-02-02 15:53:54.616 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:53:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:53:55 compute-0 ceph-mon[75334]: pgmap v1896: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:58 compute-0 ceph-mon[75334]: pgmap v1897: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:53:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:53:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:53:59.263 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:53:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:53:59.264 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:53:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:53:59.264 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:53:59 compute-0 nova_compute[239545]: 2026-02-02 15:53:59.473 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:53:59 compute-0 nova_compute[239545]: 2026-02-02 15:53:59.617 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:00 compute-0 ceph-mon[75334]: pgmap v1898: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:01 compute-0 podman[276863]: 2026-02-02 15:54:01.317789985 +0000 UTC m=+0.051133763 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:54:01 compute-0 podman[276862]: 2026-02-02 15:54:01.331284644 +0000 UTC m=+0.064335924 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 02 15:54:02 compute-0 ceph-mon[75334]: pgmap v1899: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:04 compute-0 ceph-mon[75334]: pgmap v1900: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:04 compute-0 nova_compute[239545]: 2026-02-02 15:54:04.475 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:04 compute-0 nova_compute[239545]: 2026-02-02 15:54:04.619 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:06 compute-0 ceph-mon[75334]: pgmap v1901: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:08 compute-0 ceph-mon[75334]: pgmap v1902: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:09 compute-0 nova_compute[239545]: 2026-02-02 15:54:09.477 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:09 compute-0 nova_compute[239545]: 2026-02-02 15:54:09.620 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:10 compute-0 ceph-mon[75334]: pgmap v1903: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:10 compute-0 nova_compute[239545]: 2026-02-02 15:54:10.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:12 compute-0 ceph-mon[75334]: pgmap v1904: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:13 compute-0 ceph-mon[75334]: pgmap v1905: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:13 compute-0 nova_compute[239545]: 2026-02-02 15:54:13.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:13 compute-0 nova_compute[239545]: 2026-02-02 15:54:13.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:54:13 compute-0 nova_compute[239545]: 2026-02-02 15:54:13.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:54:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.479 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.622 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.656 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.656 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.656 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:54:14 compute-0 nova_compute[239545]: 2026-02-02 15:54:14.656 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:15 compute-0 ceph-mon[75334]: pgmap v1906: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:17 compute-0 ceph-mon[75334]: pgmap v1907: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.003 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.026 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.027 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.027 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.027 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:54:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.557 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:18 compute-0 nova_compute[239545]: 2026-02-02 15:54:18.584 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:19 compute-0 ceph-mon[75334]: pgmap v1908: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.481 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.565 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.566 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.566 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.566 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.566 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:54:19 compute-0 nova_compute[239545]: 2026-02-02 15:54:19.624 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723716823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.093 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.187 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.188 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.188 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:54:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:20 compute-0 sudo[276930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:54:20 compute-0 sudo[276930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:20 compute-0 sudo[276930]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3723716823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.333 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.334 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3962MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.334 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.334 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:54:20 compute-0 sudo[276955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:54:20 compute-0 sudo[276955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.538 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.539 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.539 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:54:20 compute-0 nova_compute[239545]: 2026-02-02 15:54:20.731 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:54:20 compute-0 sudo[276955]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:54:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:54:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:54:20 compute-0 sudo[277012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:54:20 compute-0 sudo[277012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:20 compute-0 sudo[277012]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:20 compute-0 sudo[277056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:54:20 compute-0 sudo[277056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.141101649 +0000 UTC m=+0.044076150 container create e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:54:21 compute-0 systemd[1]: Started libpod-conmon-e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa.scope.
Feb 02 15:54:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.12396564 +0000 UTC m=+0.026940171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.221694611 +0000 UTC m=+0.124669132 container init e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.227308489 +0000 UTC m=+0.130282990 container start e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:54:21 compute-0 eloquent_perlman[277109]: 167 167
Feb 02 15:54:21 compute-0 systemd[1]: libpod-e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa.scope: Deactivated successfully.
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.234509826 +0000 UTC m=+0.137484347 container attach e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.234759352 +0000 UTC m=+0.137733843 container died e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-596062a08e5ddd161db6e4ab8e460169ec0fb352ccf108f1a2175ad74e403336-merged.mount: Deactivated successfully.
Feb 02 15:54:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:54:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542836983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:54:21 compute-0 podman[277093]: 2026-02-02 15:54:21.278793759 +0000 UTC m=+0.181768260 container remove e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:54:21 compute-0 nova_compute[239545]: 2026-02-02 15:54:21.284 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:54:21 compute-0 nova_compute[239545]: 2026-02-02 15:54:21.289 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:54:21 compute-0 systemd[1]: libpod-conmon-e5619f38fc937b3dfcb737d66ea7c8337c39702e60cf2d24c441c9b3edb89aaa.scope: Deactivated successfully.
Feb 02 15:54:21 compute-0 nova_compute[239545]: 2026-02-02 15:54:21.312 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:54:21 compute-0 nova_compute[239545]: 2026-02-02 15:54:21.314 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:54:21 compute-0 nova_compute[239545]: 2026-02-02 15:54:21.314 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:54:21 compute-0 ceph-mon[75334]: pgmap v1909: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:54:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2542836983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.422453886 +0000 UTC m=+0.053151072 container create 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:54:21 compute-0 systemd[1]: Started libpod-conmon-3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e.scope.
Feb 02 15:54:21 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.399229767 +0000 UTC m=+0.029927033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.494171581 +0000 UTC m=+0.124868817 container init 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.500345002 +0000 UTC m=+0.131042198 container start 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.504068763 +0000 UTC m=+0.134766009 container attach 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:54:21 compute-0 friendly_archimedes[277151]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:54:21 compute-0 friendly_archimedes[277151]: --> All data devices are unavailable
Feb 02 15:54:21 compute-0 systemd[1]: libpod-3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e.scope: Deactivated successfully.
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.885339765 +0000 UTC m=+0.516036961 container died 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c5f992ba165eeb7f4248c41958f02a7e48422f6661c5d4b67e1d8439cecfa2-merged.mount: Deactivated successfully.
Feb 02 15:54:21 compute-0 podman[277135]: 2026-02-02 15:54:21.939235785 +0000 UTC m=+0.569932991 container remove 3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:54:21 compute-0 systemd[1]: libpod-conmon-3e54677d82a5ee214fb37e50d2b14341d26aea4ec5941d360e5980c7e7bb960e.scope: Deactivated successfully.
Feb 02 15:54:21 compute-0 sudo[277056]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:22 compute-0 sudo[277184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:54:22 compute-0 sudo[277184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:22 compute-0 sudo[277184]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:22 compute-0 sudo[277209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:54:22 compute-0 sudo[277209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:22 compute-0 nova_compute[239545]: 2026-02-02 15:54:22.314 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:22 compute-0 nova_compute[239545]: 2026-02-02 15:54:22.315 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.348774199 +0000 UTC m=+0.043821303 container create ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:54:22 compute-0 systemd[1]: Started libpod-conmon-ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930.scope.
Feb 02 15:54:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.420905545 +0000 UTC m=+0.115952739 container init ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.325010607 +0000 UTC m=+0.020057741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.426536603 +0000 UTC m=+0.121583717 container start ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:54:22 compute-0 vibrant_mcnulty[277261]: 167 167
Feb 02 15:54:22 compute-0 systemd[1]: libpod-ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930.scope: Deactivated successfully.
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.431796022 +0000 UTC m=+0.126843126 container attach ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.432167121 +0000 UTC m=+0.127214235 container died ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-872b9f8605a8fbb650781b042abcfe273edc771fba20be3bccf7985e859702a9-merged.mount: Deactivated successfully.
Feb 02 15:54:22 compute-0 podman[277245]: 2026-02-02 15:54:22.475158713 +0000 UTC m=+0.170205847 container remove ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:54:22 compute-0 systemd[1]: libpod-conmon-ce0e9b193be150dcf64451367d06a1b2d7fcbe7ebf47ba62ddcd9196d4cc7930.scope: Deactivated successfully.
Feb 02 15:54:22 compute-0 podman[277284]: 2026-02-02 15:54:22.611279065 +0000 UTC m=+0.041020225 container create e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:54:22 compute-0 systemd[1]: Started libpod-conmon-e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa.scope.
Feb 02 15:54:22 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655b97345414ac4c98952fc5edcdd843ab4426290666562129f6f6bdd6f3efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655b97345414ac4c98952fc5edcdd843ab4426290666562129f6f6bdd6f3efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655b97345414ac4c98952fc5edcdd843ab4426290666562129f6f6bdd6f3efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0655b97345414ac4c98952fc5edcdd843ab4426290666562129f6f6bdd6f3efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:22 compute-0 podman[277284]: 2026-02-02 15:54:22.59473401 +0000 UTC m=+0.024475160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:22 compute-0 podman[277284]: 2026-02-02 15:54:22.706557497 +0000 UTC m=+0.136298707 container init e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:54:22 compute-0 podman[277284]: 2026-02-02 15:54:22.714297887 +0000 UTC m=+0.144039047 container start e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:54:22 compute-0 podman[277284]: 2026-02-02 15:54:22.717998807 +0000 UTC m=+0.147739937 container attach e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:54:22 compute-0 sweet_bassi[277300]: {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     "0": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "devices": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "/dev/loop3"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             ],
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_name": "ceph_lv0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_size": "21470642176",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "name": "ceph_lv0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "tags": {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_name": "ceph",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.crush_device_class": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.encrypted": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.objectstore": "bluestore",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_id": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.vdo": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.with_tpm": "0"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             },
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "vg_name": "ceph_vg0"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         }
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     ],
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     "1": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "devices": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "/dev/loop4"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             ],
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_name": "ceph_lv1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_size": "21470642176",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "name": "ceph_lv1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "tags": {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_name": "ceph",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.crush_device_class": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.encrypted": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.objectstore": "bluestore",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_id": "1",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.vdo": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.with_tpm": "0"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             },
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "vg_name": "ceph_vg1"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         }
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     ],
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     "2": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "devices": [
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "/dev/loop5"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             ],
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_name": "ceph_lv2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_size": "21470642176",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "name": "ceph_lv2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "tags": {
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.cluster_name": "ceph",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.crush_device_class": "",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.encrypted": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.objectstore": "bluestore",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osd_id": "2",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.vdo": "0",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:                 "ceph.with_tpm": "0"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             },
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "type": "block",
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:             "vg_name": "ceph_vg2"
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:         }
Feb 02 15:54:22 compute-0 sweet_bassi[277300]:     ]
Feb 02 15:54:22 compute-0 sweet_bassi[277300]: }
Feb 02 15:54:23 compute-0 systemd[1]: libpod-e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa.scope: Deactivated successfully.
Feb 02 15:54:23 compute-0 podman[277284]: 2026-02-02 15:54:23.005692939 +0000 UTC m=+0.435434069 container died e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 02 15:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0655b97345414ac4c98952fc5edcdd843ab4426290666562129f6f6bdd6f3efe-merged.mount: Deactivated successfully.
Feb 02 15:54:23 compute-0 podman[277284]: 2026-02-02 15:54:23.040678916 +0000 UTC m=+0.470420066 container remove e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 15:54:23 compute-0 systemd[1]: libpod-conmon-e64b6e537b0d2d83fec986f8e88e0cef6ff7bddc967a1462f4eac6ddd146a4fa.scope: Deactivated successfully.
Feb 02 15:54:23 compute-0 sudo[277209]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:23 compute-0 sudo[277322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:54:23 compute-0 sudo[277322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:23 compute-0 sudo[277322]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:23 compute-0 sudo[277347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:54:23 compute-0 sudo[277347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:23 compute-0 ceph-mon[75334]: pgmap v1910: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.420454001 +0000 UTC m=+0.033910430 container create 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:54:23 compute-0 systemd[1]: Started libpod-conmon-7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427.scope.
Feb 02 15:54:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.407961655 +0000 UTC m=+0.021418094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.510745291 +0000 UTC m=+0.124201740 container init 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.517957307 +0000 UTC m=+0.131413726 container start 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.521135276 +0000 UTC m=+0.134591725 container attach 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:54:23 compute-0 funny_banzai[277401]: 167 167
Feb 02 15:54:23 compute-0 systemd[1]: libpod-7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427.scope: Deactivated successfully.
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.524129149 +0000 UTC m=+0.137585568 container died 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-be15d1a8df0a66b989f9340595c85ed3042f01a1e269a8e2d83f2050733638c1-merged.mount: Deactivated successfully.
Feb 02 15:54:23 compute-0 nova_compute[239545]: 2026-02-02 15:54:23.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:23 compute-0 nova_compute[239545]: 2026-02-02 15:54:23.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:54:23 compute-0 podman[277384]: 2026-02-02 15:54:23.559004533 +0000 UTC m=+0.172460952 container remove 7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:54:23 compute-0 systemd[1]: libpod-conmon-7c6042a133be1dcc9ff49b95b50e28d5aa0420b4a57ac63d535a1790b2951427.scope: Deactivated successfully.
Feb 02 15:54:23 compute-0 podman[277424]: 2026-02-02 15:54:23.717524463 +0000 UTC m=+0.056257748 container create a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:54:23 compute-0 systemd[1]: Started libpod-conmon-a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e.scope.
Feb 02 15:54:23 compute-0 podman[277424]: 2026-02-02 15:54:23.685874208 +0000 UTC m=+0.024607543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:54:23 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31671cbe8c8a9cbfefb3dffa2be5bbc0f8a7372551ddb7a0c73251930b123354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31671cbe8c8a9cbfefb3dffa2be5bbc0f8a7372551ddb7a0c73251930b123354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31671cbe8c8a9cbfefb3dffa2be5bbc0f8a7372551ddb7a0c73251930b123354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31671cbe8c8a9cbfefb3dffa2be5bbc0f8a7372551ddb7a0c73251930b123354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:54:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:23 compute-0 podman[277424]: 2026-02-02 15:54:23.813375028 +0000 UTC m=+0.152108323 container init a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:54:23 compute-0 podman[277424]: 2026-02-02 15:54:23.818678278 +0000 UTC m=+0.157411533 container start a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:54:23 compute-0 podman[277424]: 2026-02-02 15:54:23.822438211 +0000 UTC m=+0.161171486 container attach a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 02 15:54:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:24 compute-0 lvm[277518]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:54:24 compute-0 lvm[277519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:54:24 compute-0 lvm[277518]: VG ceph_vg1 finished
Feb 02 15:54:24 compute-0 lvm[277519]: VG ceph_vg0 finished
Feb 02 15:54:24 compute-0 lvm[277521]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:54:24 compute-0 lvm[277521]: VG ceph_vg2 finished
Feb 02 15:54:24 compute-0 nova_compute[239545]: 2026-02-02 15:54:24.483 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:24 compute-0 vigorous_panini[277440]: {}
Feb 02 15:54:24 compute-0 systemd[1]: libpod-a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e.scope: Deactivated successfully.
Feb 02 15:54:24 compute-0 systemd[1]: libpod-a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e.scope: Consumed 1.150s CPU time.
Feb 02 15:54:24 compute-0 podman[277424]: 2026-02-02 15:54:24.592339836 +0000 UTC m=+0.931073081 container died a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-31671cbe8c8a9cbfefb3dffa2be5bbc0f8a7372551ddb7a0c73251930b123354-merged.mount: Deactivated successfully.
Feb 02 15:54:24 compute-0 nova_compute[239545]: 2026-02-02 15:54:24.626 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:24 compute-0 podman[277424]: 2026-02-02 15:54:24.634460917 +0000 UTC m=+0.973194192 container remove a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 02 15:54:24 compute-0 systemd[1]: libpod-conmon-a2f564f2692929b4efeead20beea319d04b6156cd27f22b7f1733b412f43686e.scope: Deactivated successfully.
Feb 02 15:54:24 compute-0 sudo[277347]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:54:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:54:24 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:24 compute-0 sudo[277535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:54:24 compute-0 sudo[277535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:54:24 compute-0 sudo[277535]: pam_unix(sudo:session): session closed for user root
Feb 02 15:54:25 compute-0 ceph-mon[75334]: pgmap v1911: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:25 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:54:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:27 compute-0 ceph-mon[75334]: pgmap v1912: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:54:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2038190540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:54:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:54:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2038190540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:54:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:28 compute-0 nova_compute[239545]: 2026-02-02 15:54:28.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2038190540' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:54:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2038190540' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:54:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:29 compute-0 nova_compute[239545]: 2026-02-02 15:54:29.486 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:29 compute-0 nova_compute[239545]: 2026-02-02 15:54:29.627 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:29 compute-0 ceph-mon[75334]: pgmap v1913: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:31 compute-0 ceph-mon[75334]: pgmap v1914: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:32 compute-0 podman[277561]: 2026-02-02 15:54:32.303523494 +0000 UTC m=+0.047060573 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:54:32 compute-0 podman[277560]: 2026-02-02 15:54:32.325725787 +0000 UTC m=+0.070898237 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 02 15:54:33 compute-0 nova_compute[239545]: 2026-02-02 15:54:33.559 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:33 compute-0 nova_compute[239545]: 2026-02-02 15:54:33.560 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:54:33 compute-0 nova_compute[239545]: 2026-02-02 15:54:33.585 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:54:33 compute-0 ceph-mon[75334]: pgmap v1915: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:34 compute-0 nova_compute[239545]: 2026-02-02 15:54:34.488 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:34 compute-0 nova_compute[239545]: 2026-02-02 15:54:34.629 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:35 compute-0 ceph-mon[75334]: pgmap v1916: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:37 compute-0 ceph-mon[75334]: pgmap v1917: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:39 compute-0 nova_compute[239545]: 2026-02-02 15:54:39.490 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:39 compute-0 nova_compute[239545]: 2026-02-02 15:54:39.630 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:39 compute-0 ceph-mon[75334]: pgmap v1918: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:41 compute-0 ceph-mon[75334]: pgmap v1919: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:54:42
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'images']
Feb 02 15:54:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:54:43 compute-0 ceph-mon[75334]: pgmap v1920: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.491 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.631 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:54:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.947 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.973 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Triggering sync for uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.974 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:54:44 compute-0 nova_compute[239545]: 2026-02-02 15:54:44.974 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:54:45 compute-0 nova_compute[239545]: 2026-02-02 15:54:45.001 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:54:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:54:45 compute-0 nova_compute[239545]: 2026-02-02 15:54:45.052 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:54:45 compute-0 ceph-mon[75334]: pgmap v1921: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:47 compute-0 ceph-mon[75334]: pgmap v1922: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:49 compute-0 nova_compute[239545]: 2026-02-02 15:54:49.493 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:49 compute-0 nova_compute[239545]: 2026-02-02 15:54:49.633 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:49 compute-0 ceph-mon[75334]: pgmap v1923: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:51 compute-0 ceph-mon[75334]: pgmap v1924: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:53 compute-0 ceph-mon[75334]: pgmap v1925: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:54 compute-0 nova_compute[239545]: 2026-02-02 15:54:54.496 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:54 compute-0 nova_compute[239545]: 2026-02-02 15:54:54.635 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:54:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:54:55 compute-0 ceph-mon[75334]: pgmap v1926: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:57 compute-0 ceph-mon[75334]: pgmap v1927: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:54:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:54:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:54:59.264 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:54:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:54:59.265 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:54:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:54:59.265 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:54:59 compute-0 nova_compute[239545]: 2026-02-02 15:54:59.498 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:59 compute-0 nova_compute[239545]: 2026-02-02 15:54:59.636 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:54:59 compute-0 ceph-mon[75334]: pgmap v1928: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:01 compute-0 ceph-mon[75334]: pgmap v1929: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:03 compute-0 podman[277607]: 2026-02-02 15:55:03.310124085 +0000 UTC m=+0.043165618 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:55:03 compute-0 podman[277606]: 2026-02-02 15:55:03.337402322 +0000 UTC m=+0.074474373 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:55:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:03 compute-0 ceph-mon[75334]: pgmap v1930: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:04 compute-0 nova_compute[239545]: 2026-02-02 15:55:04.500 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:04 compute-0 nova_compute[239545]: 2026-02-02 15:55:04.638 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:05 compute-0 ceph-mon[75334]: pgmap v1931: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:07 compute-0 ceph-mon[75334]: pgmap v1932: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:09 compute-0 nova_compute[239545]: 2026-02-02 15:55:09.502 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:09 compute-0 nova_compute[239545]: 2026-02-02 15:55:09.639 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:09 compute-0 ceph-mon[75334]: pgmap v1933: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:10 compute-0 nova_compute[239545]: 2026-02-02 15:55:10.570 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:11 compute-0 ceph-mon[75334]: pgmap v1934: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:13 compute-0 ceph-mon[75334]: pgmap v1935: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:14 compute-0 nova_compute[239545]: 2026-02-02 15:55:14.505 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:14 compute-0 nova_compute[239545]: 2026-02-02 15:55:14.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:14 compute-0 nova_compute[239545]: 2026-02-02 15:55:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:55:14 compute-0 nova_compute[239545]: 2026-02-02 15:55:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:55:14 compute-0 nova_compute[239545]: 2026-02-02 15:55:14.642 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:15 compute-0 nova_compute[239545]: 2026-02-02 15:55:15.692 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:55:15 compute-0 nova_compute[239545]: 2026-02-02 15:55:15.693 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:55:15 compute-0 nova_compute[239545]: 2026-02-02 15:55:15.693 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:55:15 compute-0 nova_compute[239545]: 2026-02-02 15:55:15.693 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:55:15 compute-0 ceph-mon[75334]: pgmap v1936: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:17 compute-0 nova_compute[239545]: 2026-02-02 15:55:17.832 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:55:17 compute-0 nova_compute[239545]: 2026-02-02 15:55:17.862 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:55:17 compute-0 nova_compute[239545]: 2026-02-02 15:55:17.862 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:55:17 compute-0 ceph-mon[75334]: pgmap v1937: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:19 compute-0 nova_compute[239545]: 2026-02-02 15:55:19.507 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:19 compute-0 nova_compute[239545]: 2026-02-02 15:55:19.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:19 compute-0 nova_compute[239545]: 2026-02-02 15:55:19.643 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:19 compute-0 ceph-mon[75334]: pgmap v1938: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.580 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.581 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.581 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:55:20 compute-0 nova_compute[239545]: 2026-02-02 15:55:20.582 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:55:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:55:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763143357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.097 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.160 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.161 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.161 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.314 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.315 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3975MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.315 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.316 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.403 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.404 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.404 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:55:21 compute-0 nova_compute[239545]: 2026-02-02 15:55:21.445 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:55:21 compute-0 ceph-mon[75334]: pgmap v1939: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3763143357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:55:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:55:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414339573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:55:22 compute-0 nova_compute[239545]: 2026-02-02 15:55:22.014 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:55:22 compute-0 nova_compute[239545]: 2026-02-02 15:55:22.021 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:55:22 compute-0 nova_compute[239545]: 2026-02-02 15:55:22.041 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:55:22 compute-0 nova_compute[239545]: 2026-02-02 15:55:22.043 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:55:22 compute-0 nova_compute[239545]: 2026-02-02 15:55:22.043 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:55:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/414339573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:55:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:24 compute-0 ceph-mon[75334]: pgmap v1940: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.015546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724015581, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3422629, "memory_usage": 3479056, "flush_reason": "Manual Compaction"}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724031344, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3366709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36860, "largest_seqno": 38907, "table_properties": {"data_size": 3357380, "index_size": 5887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18767, "raw_average_key_size": 20, "raw_value_size": 3338798, "raw_average_value_size": 3578, "num_data_blocks": 261, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047498, "oldest_key_time": 1770047498, "file_creation_time": 1770047724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 15844 microseconds, and 5545 cpu microseconds.
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.031388) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3366709 bytes OK
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.031405) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.033120) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.033132) EVENT_LOG_v1 {"time_micros": 1770047724033128, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.033149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3414078, prev total WAL file size 3414078, number of live WAL files 2.
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.033651) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3287KB)], [77(10177KB)]
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724033682, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13788419, "oldest_snapshot_seqno": -1}
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.045 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.046 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7145 keys, 12102600 bytes, temperature: kUnknown
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724084067, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12102600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12048199, "index_size": 35409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 179812, "raw_average_key_size": 25, "raw_value_size": 11913497, "raw_average_value_size": 1667, "num_data_blocks": 1410, "num_entries": 7145, "num_filter_entries": 7145, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.084503) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12102600 bytes
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.086234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.0 rd, 239.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7663, records dropped: 518 output_compression: NoCompression
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.086281) EVENT_LOG_v1 {"time_micros": 1770047724086261, "job": 44, "event": "compaction_finished", "compaction_time_micros": 50509, "compaction_time_cpu_micros": 18077, "output_level": 6, "num_output_files": 1, "total_output_size": 12102600, "num_input_records": 7663, "num_output_records": 7145, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724086970, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047724088389, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.033554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.088443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.088449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.088451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.088452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:55:24.088454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:55:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.509 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:55:24 compute-0 nova_compute[239545]: 2026-02-02 15:55:24.646 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:24 compute-0 sudo[277695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:55:24 compute-0 sudo[277695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:24 compute-0 sudo[277695]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:24 compute-0 sudo[277720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:55:24 compute-0 sudo[277720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:25 compute-0 sudo[277720]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:55:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:55:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:55:25 compute-0 sudo[277776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:55:25 compute-0 sudo[277776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:25 compute-0 sudo[277776]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:25 compute-0 sudo[277801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:55:25 compute-0 sudo[277801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.85125675 +0000 UTC m=+0.079096688 container create 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.806596076 +0000 UTC m=+0.034436104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:25 compute-0 systemd[1]: Started libpod-conmon-670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42.scope.
Feb 02 15:55:25 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.95900108 +0000 UTC m=+0.186841038 container init 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.965413766 +0000 UTC m=+0.193253704 container start 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:55:25 compute-0 nifty_proskuriakova[277854]: 167 167
Feb 02 15:55:25 compute-0 systemd[1]: libpod-670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42.scope: Deactivated successfully.
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.971938627 +0000 UTC m=+0.199778615 container attach 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 02 15:55:25 compute-0 podman[277838]: 2026-02-02 15:55:25.97250544 +0000 UTC m=+0.200345388 container died 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 02 15:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-632f354c5720838a4cd419f2d6c70a62255a601954220042b3bfb3fb587753ca-merged.mount: Deactivated successfully.
Feb 02 15:55:26 compute-0 podman[277838]: 2026-02-02 15:55:26.009757424 +0000 UTC m=+0.237597362 container remove 670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:55:26 compute-0 systemd[1]: libpod-conmon-670577fc52244447184e1d1bd71725c85c00f794cf98f07ceaf3fe770c11ec42.scope: Deactivated successfully.
Feb 02 15:55:26 compute-0 ceph-mon[75334]: pgmap v1941: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:55:26 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.167366147 +0000 UTC m=+0.066322787 container create 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:55:26 compute-0 systemd[1]: Started libpod-conmon-735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e.scope.
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.123487521 +0000 UTC m=+0.022444261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:26 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.310574898 +0000 UTC m=+0.209531558 container init 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.317432816 +0000 UTC m=+0.216389456 container start 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.326518348 +0000 UTC m=+0.225474988 container attach 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:55:26 compute-0 goofy_fermi[277895]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:55:26 compute-0 goofy_fermi[277895]: --> All data devices are unavailable
Feb 02 15:55:26 compute-0 systemd[1]: libpod-735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e.scope: Deactivated successfully.
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.756477746 +0000 UTC m=+0.655434426 container died 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 02 15:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-03d909d3e460600f4e4518d203b886a69cbf6ef47e81f6f4e9474931e81f1cf4-merged.mount: Deactivated successfully.
Feb 02 15:55:26 compute-0 podman[277878]: 2026-02-02 15:55:26.800938717 +0000 UTC m=+0.699895347 container remove 735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 02 15:55:26 compute-0 systemd[1]: libpod-conmon-735d9d59611d3318bfdcb66359739e454e9660da24fc0fea33a36fcea3e4647e.scope: Deactivated successfully.
Feb 02 15:55:26 compute-0 sudo[277801]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:26 compute-0 sudo[277929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:55:26 compute-0 sudo[277929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:26 compute-0 sudo[277929]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:26 compute-0 sudo[277954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:55:26 compute-0 sudo[277954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.190687029 +0000 UTC m=+0.038308349 container create 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 15:55:27 compute-0 systemd[1]: Started libpod-conmon-70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f.scope.
Feb 02 15:55:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.258042371 +0000 UTC m=+0.105663711 container init 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.263489394 +0000 UTC m=+0.111110714 container start 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.266840527 +0000 UTC m=+0.114461867 container attach 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:55:27 compute-0 cranky_keller[278007]: 167 167
Feb 02 15:55:27 compute-0 systemd[1]: libpod-70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f.scope: Deactivated successfully.
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.267689987 +0000 UTC m=+0.115311307 container died 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.172892484 +0000 UTC m=+0.020513834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bf7657e4d7efc0425c9dc8e5a92736d06c924949f324d5e249472f1db28f76c-merged.mount: Deactivated successfully.
Feb 02 15:55:27 compute-0 podman[277990]: 2026-02-02 15:55:27.30002242 +0000 UTC m=+0.147643740 container remove 70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:55:27 compute-0 systemd[1]: libpod-conmon-70ada39515d17caca32d1dbdb226427f7aae86068add1d4d6e5f91a0fc2fe12f.scope: Deactivated successfully.
Feb 02 15:55:27 compute-0 podman[278030]: 2026-02-02 15:55:27.430473758 +0000 UTC m=+0.036310542 container create a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 02 15:55:27 compute-0 systemd[1]: Started libpod-conmon-a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587.scope.
Feb 02 15:55:27 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad29d10363b66323dfb500124ffba9f2c9277313db99e2214248f573cd5726bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad29d10363b66323dfb500124ffba9f2c9277313db99e2214248f573cd5726bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad29d10363b66323dfb500124ffba9f2c9277313db99e2214248f573cd5726bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad29d10363b66323dfb500124ffba9f2c9277313db99e2214248f573cd5726bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:27 compute-0 podman[278030]: 2026-02-02 15:55:27.502018361 +0000 UTC m=+0.107855175 container init a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:55:27 compute-0 podman[278030]: 2026-02-02 15:55:27.414535006 +0000 UTC m=+0.020371850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:27 compute-0 podman[278030]: 2026-02-02 15:55:27.508720546 +0000 UTC m=+0.114557340 container start a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 02 15:55:27 compute-0 podman[278030]: 2026-02-02 15:55:27.512382365 +0000 UTC m=+0.118234759 container attach a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]: {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     "0": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "devices": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "/dev/loop3"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             ],
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_name": "ceph_lv0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_size": "21470642176",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "name": "ceph_lv0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "tags": {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.crush_device_class": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.encrypted": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_id": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.vdo": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.with_tpm": "0"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             },
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "vg_name": "ceph_vg0"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         }
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     ],
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     "1": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "devices": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "/dev/loop4"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             ],
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_name": "ceph_lv1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_size": "21470642176",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "name": "ceph_lv1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "tags": {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.crush_device_class": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.encrypted": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_id": "1",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.vdo": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.with_tpm": "0"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             },
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "vg_name": "ceph_vg1"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         }
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     ],
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     "2": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "devices": [
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "/dev/loop5"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             ],
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_name": "ceph_lv2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_size": "21470642176",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "name": "ceph_lv2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "tags": {
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.cluster_name": "ceph",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.crush_device_class": "",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.encrypted": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.objectstore": "bluestore",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osd_id": "2",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.vdo": "0",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:                 "ceph.with_tpm": "0"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             },
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "type": "block",
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:             "vg_name": "ceph_vg2"
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:         }
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]:     ]
Feb 02 15:55:27 compute-0 vibrant_agnesi[278047]: }
Feb 02 15:55:27 compute-0 systemd[1]: libpod-a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587.scope: Deactivated successfully.
Feb 02 15:55:27 compute-0 podman[278056]: 2026-02-02 15:55:27.827104309 +0000 UTC m=+0.023580199 container died a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 02 15:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad29d10363b66323dfb500124ffba9f2c9277313db99e2214248f573cd5726bb-merged.mount: Deactivated successfully.
Feb 02 15:55:27 compute-0 podman[278056]: 2026-02-02 15:55:27.891789295 +0000 UTC m=+0.088265145 container remove a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 02 15:55:27 compute-0 systemd[1]: libpod-conmon-a8ad45af984d8337568dbb63d700e35190c25b9788d9c13fd4d0c72977d42587.scope: Deactivated successfully.
Feb 02 15:55:27 compute-0 sudo[277954]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:27 compute-0 sudo[278071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:55:27 compute-0 sudo[278071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:27 compute-0 sudo[278071]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:28 compute-0 sudo[278096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:55:28 compute-0 sudo[278096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:28 compute-0 ceph-mon[75334]: pgmap v1942: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.29267664 +0000 UTC m=+0.033400698 container create dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:55:28 compute-0 systemd[1]: Started libpod-conmon-dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9.scope.
Feb 02 15:55:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:55:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854512949' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:55:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:55:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854512949' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:55:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.369220927 +0000 UTC m=+0.109945005 container init dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.278348259 +0000 UTC m=+0.019072337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.375788438 +0000 UTC m=+0.116512526 container start dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:55:28 compute-0 systemd[1]: libpod-dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9.scope: Deactivated successfully.
Feb 02 15:55:28 compute-0 thirsty_archimedes[278150]: 167 167
Feb 02 15:55:28 compute-0 conmon[278150]: conmon dca7a23c37335060b532 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9.scope/container/memory.events
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.379454767 +0000 UTC m=+0.120178875 container attach dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.380178136 +0000 UTC m=+0.120902234 container died dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6034c43bb6869ac5637d2cfd4b3decc1d17def3c279fcf929f5a6bfbfab9dceb-merged.mount: Deactivated successfully.
Feb 02 15:55:28 compute-0 podman[278133]: 2026-02-02 15:55:28.419751415 +0000 UTC m=+0.160475473 container remove dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:55:28 compute-0 systemd[1]: libpod-conmon-dca7a23c37335060b532e42ef4649380a7429e4e6b0dd705c9b2c0a3b9ff99f9.scope: Deactivated successfully.
Feb 02 15:55:28 compute-0 podman[278172]: 2026-02-02 15:55:28.580909066 +0000 UTC m=+0.048075810 container create 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:55:28 compute-0 systemd[1]: Started libpod-conmon-947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131.scope.
Feb 02 15:55:28 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4f5be49a9fb98eadafda5a3c12e3ba4b88adff935fa5605d47262d175cabc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4f5be49a9fb98eadafda5a3c12e3ba4b88adff935fa5605d47262d175cabc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4f5be49a9fb98eadafda5a3c12e3ba4b88adff935fa5605d47262d175cabc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e4f5be49a9fb98eadafda5a3c12e3ba4b88adff935fa5605d47262d175cabc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:55:28 compute-0 podman[278172]: 2026-02-02 15:55:28.559240705 +0000 UTC m=+0.026407539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:55:28 compute-0 podman[278172]: 2026-02-02 15:55:28.668135784 +0000 UTC m=+0.135302608 container init 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:55:28 compute-0 podman[278172]: 2026-02-02 15:55:28.675584886 +0000 UTC m=+0.142751660 container start 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 02 15:55:28 compute-0 podman[278172]: 2026-02-02 15:55:28.679609065 +0000 UTC m=+0.146775889 container attach 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:55:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3854512949' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:55:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3854512949' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:55:29 compute-0 lvm[278267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:55:29 compute-0 lvm[278267]: VG ceph_vg0 finished
Feb 02 15:55:29 compute-0 lvm[278268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:55:29 compute-0 lvm[278268]: VG ceph_vg1 finished
Feb 02 15:55:29 compute-0 lvm[278270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:55:29 compute-0 lvm[278270]: VG ceph_vg2 finished
Feb 02 15:55:29 compute-0 compassionate_cori[278189]: {}
Feb 02 15:55:29 compute-0 systemd[1]: libpod-947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131.scope: Deactivated successfully.
Feb 02 15:55:29 compute-0 podman[278172]: 2026-02-02 15:55:29.378093906 +0000 UTC m=+0.845260640 container died 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7e4f5be49a9fb98eadafda5a3c12e3ba4b88adff935fa5605d47262d175cabc-merged.mount: Deactivated successfully.
Feb 02 15:55:29 compute-0 podman[278172]: 2026-02-02 15:55:29.451842573 +0000 UTC m=+0.919009307 container remove 947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:55:29 compute-0 systemd[1]: libpod-conmon-947ce12b9b6e1cbf3c76435c6caca1917061c30afeb2ad8d0063296d0a75e131.scope: Deactivated successfully.
Feb 02 15:55:29 compute-0 sudo[278096]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:55:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:55:29 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:29 compute-0 nova_compute[239545]: 2026-02-02 15:55:29.543 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:29 compute-0 sudo[278286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:55:29 compute-0 sudo[278286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:55:29 compute-0 sudo[278286]: pam_unix(sudo:session): session closed for user root
Feb 02 15:55:29 compute-0 nova_compute[239545]: 2026-02-02 15:55:29.648 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:30 compute-0 ceph-mon[75334]: pgmap v1943: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:55:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:32 compute-0 ceph-mon[75334]: pgmap v1944: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:33 compute-0 ceph-mon[75334]: pgmap v1945: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:34 compute-0 podman[278313]: 2026-02-02 15:55:34.317807023 +0000 UTC m=+0.052481608 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 02 15:55:34 compute-0 podman[278312]: 2026-02-02 15:55:34.345596695 +0000 UTC m=+0.079505790 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:55:34 compute-0 nova_compute[239545]: 2026-02-02 15:55:34.546 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:34 compute-0 nova_compute[239545]: 2026-02-02 15:55:34.650 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:35 compute-0 ceph-mon[75334]: pgmap v1946: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:37 compute-0 ceph-mon[75334]: pgmap v1947: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:39 compute-0 ceph-mon[75334]: pgmap v1948: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:39 compute-0 nova_compute[239545]: 2026-02-02 15:55:39.548 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:39 compute-0 nova_compute[239545]: 2026-02-02 15:55:39.653 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:41 compute-0 ceph-mon[75334]: pgmap v1949: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:55:42
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Feb 02 15:55:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:55:43 compute-0 ceph-mon[75334]: pgmap v1950: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:44 compute-0 nova_compute[239545]: 2026-02-02 15:55:44.549 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:44 compute-0 nova_compute[239545]: 2026-02-02 15:55:44.656 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:55:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:55:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:55:45 compute-0 ceph-mon[75334]: pgmap v1951: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:47 compute-0 ceph-mon[75334]: pgmap v1952: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:49 compute-0 nova_compute[239545]: 2026-02-02 15:55:49.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:49 compute-0 nova_compute[239545]: 2026-02-02 15:55:49.658 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:49 compute-0 ceph-mon[75334]: pgmap v1953: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:51 compute-0 ceph-mon[75334]: pgmap v1954: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:53 compute-0 ceph-mon[75334]: pgmap v1955: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:54 compute-0 nova_compute[239545]: 2026-02-02 15:55:54.552 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:54 compute-0 nova_compute[239545]: 2026-02-02 15:55:54.659 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:55:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:55 compute-0 ceph-mon[75334]: pgmap v1956: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:58 compute-0 ceph-mon[75334]: pgmap v1957: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:55:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:55:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:55:59.265 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:55:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:55:59.266 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:55:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:55:59.266 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:55:59 compute-0 nova_compute[239545]: 2026-02-02 15:55:59.554 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:55:59 compute-0 nova_compute[239545]: 2026-02-02 15:55:59.661 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:00 compute-0 ceph-mon[75334]: pgmap v1958: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:02 compute-0 ceph-mon[75334]: pgmap v1959: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:04 compute-0 ceph-mon[75334]: pgmap v1960: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:04 compute-0 nova_compute[239545]: 2026-02-02 15:56:04.557 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:04 compute-0 nova_compute[239545]: 2026-02-02 15:56:04.663 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:05 compute-0 podman[278356]: 2026-02-02 15:56:05.31093693 +0000 UTC m=+0.046726366 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 02 15:56:05 compute-0 podman[278355]: 2026-02-02 15:56:05.339748307 +0000 UTC m=+0.082011482 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:56:06 compute-0 ceph-mon[75334]: pgmap v1961: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:08 compute-0 ceph-mon[75334]: pgmap v1962: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:09 compute-0 nova_compute[239545]: 2026-02-02 15:56:09.561 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:09 compute-0 nova_compute[239545]: 2026-02-02 15:56:09.666 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:10 compute-0 ceph-mon[75334]: pgmap v1963: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:11 compute-0 nova_compute[239545]: 2026-02-02 15:56:11.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:12 compute-0 ceph-mon[75334]: pgmap v1964: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:14 compute-0 ceph-mon[75334]: pgmap v1965: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:14 compute-0 nova_compute[239545]: 2026-02-02 15:56:14.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:14 compute-0 nova_compute[239545]: 2026-02-02 15:56:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:56:14 compute-0 nova_compute[239545]: 2026-02-02 15:56:14.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:56:14 compute-0 nova_compute[239545]: 2026-02-02 15:56:14.562 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:14 compute-0 nova_compute[239545]: 2026-02-02 15:56:14.667 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:15 compute-0 nova_compute[239545]: 2026-02-02 15:56:15.709 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:56:15 compute-0 nova_compute[239545]: 2026-02-02 15:56:15.710 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:56:15 compute-0 nova_compute[239545]: 2026-02-02 15:56:15.710 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:56:15 compute-0 nova_compute[239545]: 2026-02-02 15:56:15.710 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:56:16 compute-0 ceph-mon[75334]: pgmap v1966: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:17 compute-0 nova_compute[239545]: 2026-02-02 15:56:17.596 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:56:17 compute-0 nova_compute[239545]: 2026-02-02 15:56:17.629 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:56:17 compute-0 nova_compute[239545]: 2026-02-02 15:56:17.630 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:56:18 compute-0 ceph-mon[75334]: pgmap v1967: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:19 compute-0 nova_compute[239545]: 2026-02-02 15:56:19.565 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:19 compute-0 nova_compute[239545]: 2026-02-02 15:56:19.669 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:20 compute-0 ceph-mon[75334]: pgmap v1968: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:20 compute-0 nova_compute[239545]: 2026-02-02 15:56:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:20 compute-0 nova_compute[239545]: 2026-02-02 15:56:20.605 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:21 compute-0 nova_compute[239545]: 2026-02-02 15:56:21.600 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:22 compute-0 ceph-mon[75334]: pgmap v1969: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.584 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.585 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.585 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:56:22 compute-0 nova_compute[239545]: 2026-02-02 15:56:22.585 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:56:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:56:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40309344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.101 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.175 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.175 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.175 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.336 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.337 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3955MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.337 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.338 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:56:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/40309344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.570 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.570 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.570 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:56:23 compute-0 nova_compute[239545]: 2026-02-02 15:56:23.600 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:56:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:56:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850010148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.126 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.132 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.150 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.151 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.151 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:56:24 compute-0 ceph-mon[75334]: pgmap v1970: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/850010148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.567 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:24 compute-0 nova_compute[239545]: 2026-02-02 15:56:24.670 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:56:25 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8641 writes, 39K keys, 8641 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8641 writes, 8641 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1436 writes, 6457 keys, 1436 commit groups, 1.0 writes per commit group, ingest: 9.16 MB, 0.02 MB/s
                                           Interval WAL: 1436 writes, 1436 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    135.4      0.34              0.09        22    0.016       0      0       0.0       0.0
                                             L6      1/0   11.54 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    185.2    155.5      1.18              0.43        21    0.056    118K    12K       0.0       0.0
                                            Sum      1/0   11.54 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9    143.4    150.9      1.52              0.52        43    0.035    118K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    185.3    188.9      0.33              0.12        10    0.033     36K   2582       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    185.2    155.5      1.18              0.43        21    0.056    118K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.5      0.34              0.09        21    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.045, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.08 MB/s write, 0.21 GB read, 0.07 MB/s read, 1.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1f12ef8d0#2 capacity: 304.00 MB usage: 25.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000265 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1704,24.77 MB,8.14747%) FilterBlock(44,344.67 KB,0.110722%) IndexBlock(44,666.52 KB,0.21411%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 02 15:56:26 compute-0 nova_compute[239545]: 2026-02-02 15:56:26.152 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:26 compute-0 nova_compute[239545]: 2026-02-02 15:56:26.153 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:56:26 compute-0 nova_compute[239545]: 2026-02-02 15:56:26.153 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:56:26 compute-0 ceph-mon[75334]: pgmap v1971: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:56:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2888263901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:56:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:56:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2888263901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:56:28 compute-0 ceph-mon[75334]: pgmap v1972: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2888263901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:56:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2888263901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:56:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:29 compute-0 nova_compute[239545]: 2026-02-02 15:56:29.569 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:29 compute-0 sudo[278445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:56:29 compute-0 nova_compute[239545]: 2026-02-02 15:56:29.673 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:29 compute-0 sudo[278445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:29 compute-0 sudo[278445]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:29 compute-0 sudo[278470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:56:29 compute-0 sudo[278470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:30 compute-0 sudo[278470]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:56:30 compute-0 sudo[278525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:56:30 compute-0 sudo[278525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:30 compute-0 sudo[278525]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:30 compute-0 sudo[278550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:56:30 compute-0 sudo[278550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:30 compute-0 ceph-mon[75334]: pgmap v1973: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:56:30 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.590340115 +0000 UTC m=+0.036784762 container create e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:56:30 compute-0 systemd[1]: Started libpod-conmon-e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d.scope.
Feb 02 15:56:30 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.572655043 +0000 UTC m=+0.019099700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.679168563 +0000 UTC m=+0.125613290 container init e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.686176985 +0000 UTC m=+0.132621662 container start e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.691278529 +0000 UTC m=+0.137723176 container attach e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:56:30 compute-0 systemd[1]: libpod-e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d.scope: Deactivated successfully.
Feb 02 15:56:30 compute-0 upbeat_darwin[278605]: 167 167
Feb 02 15:56:30 compute-0 conmon[278605]: conmon e30621efa7ba72f9c9b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d.scope/container/memory.events
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.69374578 +0000 UTC m=+0.140190467 container died e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e79d4186fd2533ff9c016f09af44a40571de56dc31abda9bd8f31aa43a564dc-merged.mount: Deactivated successfully.
Feb 02 15:56:30 compute-0 podman[278588]: 2026-02-02 15:56:30.75208519 +0000 UTC m=+0.198529847 container remove e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:56:30 compute-0 systemd[1]: libpod-conmon-e30621efa7ba72f9c9b1cb3f6e5a895fd89ac5f2aa63f8b768a85aaf3f50309d.scope: Deactivated successfully.
Feb 02 15:56:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:30 compute-0 podman[278629]: 2026-02-02 15:56:30.928480334 +0000 UTC m=+0.049124275 container create 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:56:30 compute-0 systemd[1]: Started libpod-conmon-75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c.scope.
Feb 02 15:56:31 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:31 compute-0 podman[278629]: 2026-02-02 15:56:30.911495908 +0000 UTC m=+0.032139869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:31 compute-0 podman[278629]: 2026-02-02 15:56:31.041493024 +0000 UTC m=+0.162136995 container init 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:56:31 compute-0 podman[278629]: 2026-02-02 15:56:31.048584418 +0000 UTC m=+0.169228369 container start 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 02 15:56:31 compute-0 podman[278629]: 2026-02-02 15:56:31.059191478 +0000 UTC m=+0.179835469 container attach 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 02 15:56:31 compute-0 infallible_brown[278646]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:56:31 compute-0 infallible_brown[278646]: --> All data devices are unavailable
Feb 02 15:56:31 compute-0 systemd[1]: libpod-75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c.scope: Deactivated successfully.
Feb 02 15:56:31 compute-0 podman[278666]: 2026-02-02 15:56:31.605125739 +0000 UTC m=+0.028392316 container died 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b17fd68a68664ba9d805902c2a0c6ea6a50b9504ff9365c994c431620a94cc27-merged.mount: Deactivated successfully.
Feb 02 15:56:31 compute-0 podman[278666]: 2026-02-02 15:56:31.650289056 +0000 UTC m=+0.073555613 container remove 75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_brown, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 02 15:56:31 compute-0 systemd[1]: libpod-conmon-75b4a17d1595af348d1bff09c86e38cd6907165c236abe3ba8eafc56d58ed60c.scope: Deactivated successfully.
Feb 02 15:56:31 compute-0 sudo[278550]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:31 compute-0 sudo[278680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:56:31 compute-0 sudo[278680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:31 compute-0 sudo[278680]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:31 compute-0 sudo[278705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:56:31 compute-0 sudo[278705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.128855836 +0000 UTC m=+0.053527952 container create edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:56:32 compute-0 systemd[1]: Started libpod-conmon-edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4.scope.
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.100526692 +0000 UTC m=+0.025198788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.256476654 +0000 UTC m=+0.181148750 container init edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.263449775 +0000 UTC m=+0.188121851 container start edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.266745717 +0000 UTC m=+0.191417823 container attach edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:56:32 compute-0 systemd[1]: libpod-edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4.scope: Deactivated successfully.
Feb 02 15:56:32 compute-0 laughing_wilbur[278758]: 167 167
Feb 02 15:56:32 compute-0 conmon[278758]: conmon edeba70eb63c8235c3f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4.scope/container/memory.events
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.27014875 +0000 UTC m=+0.194820826 container died edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-22700d794f2a14f3fb6ddc2828bd801f60286f67359090666d9e95eb0dc34767-merged.mount: Deactivated successfully.
Feb 02 15:56:32 compute-0 podman[278742]: 2026-02-02 15:56:32.363219281 +0000 UTC m=+0.287891357 container remove edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilbur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:56:32 compute-0 systemd[1]: libpod-conmon-edeba70eb63c8235c3f269c0c944dd09d197fbf110ae5a6042e664bb6a2de4d4.scope: Deactivated successfully.
Feb 02 15:56:32 compute-0 ceph-mon[75334]: pgmap v1974: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.543465299 +0000 UTC m=+0.060980175 container create 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 15:56:32 compute-0 systemd[1]: Started libpod-conmon-6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54.scope.
Feb 02 15:56:32 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.507980529 +0000 UTC m=+0.025495425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558e639cac74b0f7b68582e69fbc21e9e58f2c90b9140674a5cf4610f8751ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558e639cac74b0f7b68582e69fbc21e9e58f2c90b9140674a5cf4610f8751ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558e639cac74b0f7b68582e69fbc21e9e58f2c90b9140674a5cf4610f8751ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f558e639cac74b0f7b68582e69fbc21e9e58f2c90b9140674a5cf4610f8751ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.629906178 +0000 UTC m=+0.147421064 container init 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.636213273 +0000 UTC m=+0.153728169 container start 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.66672796 +0000 UTC m=+0.184242836 container attach 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]: {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     "0": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "devices": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "/dev/loop3"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             ],
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_name": "ceph_lv0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_size": "21470642176",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "name": "ceph_lv0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "tags": {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_name": "ceph",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.crush_device_class": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.encrypted": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.objectstore": "bluestore",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_id": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.vdo": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.with_tpm": "0"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             },
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "vg_name": "ceph_vg0"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         }
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     ],
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     "1": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "devices": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "/dev/loop4"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             ],
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_name": "ceph_lv1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_size": "21470642176",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "name": "ceph_lv1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "tags": {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_name": "ceph",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.crush_device_class": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.encrypted": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.objectstore": "bluestore",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_id": "1",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.vdo": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.with_tpm": "0"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             },
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "vg_name": "ceph_vg1"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         }
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     ],
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     "2": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "devices": [
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "/dev/loop5"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             ],
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_name": "ceph_lv2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_size": "21470642176",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "name": "ceph_lv2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "tags": {
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.cluster_name": "ceph",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.crush_device_class": "",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.encrypted": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.objectstore": "bluestore",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osd_id": "2",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.vdo": "0",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:                 "ceph.with_tpm": "0"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             },
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "type": "block",
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:             "vg_name": "ceph_vg2"
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:         }
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]:     ]
Feb 02 15:56:32 compute-0 vigilant_elbakyan[278799]: }
Feb 02 15:56:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:32 compute-0 systemd[1]: libpod-6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54.scope: Deactivated successfully.
Feb 02 15:56:32 compute-0 podman[278782]: 2026-02-02 15:56:32.928273901 +0000 UTC m=+0.445788787 container died 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f558e639cac74b0f7b68582e69fbc21e9e58f2c90b9140674a5cf4610f8751ce-merged.mount: Deactivated successfully.
Feb 02 15:56:33 compute-0 podman[278782]: 2026-02-02 15:56:33.036574885 +0000 UTC m=+0.554089761 container remove 6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:56:33 compute-0 systemd[1]: libpod-conmon-6c18eea9e3a2fdb5de757f8e5bd7570b08620808bd16fd240be135139fabae54.scope: Deactivated successfully.
Feb 02 15:56:33 compute-0 sudo[278705]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:33 compute-0 sudo[278822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:56:33 compute-0 sudo[278822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:33 compute-0 sudo[278822]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:33 compute-0 sudo[278847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:56:33 compute-0 sudo[278847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.578822307 +0000 UTC m=+0.122666398 container create 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.494697015 +0000 UTC m=+0.038541096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:33 compute-0 systemd[1]: Started libpod-conmon-67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98.scope.
Feb 02 15:56:33 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.690209287 +0000 UTC m=+0.234053358 container init 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.695682741 +0000 UTC m=+0.239526802 container start 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:56:33 compute-0 festive_sutherland[278901]: 167 167
Feb 02 15:56:33 compute-0 systemd[1]: libpod-67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98.scope: Deactivated successfully.
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.729355896 +0000 UTC m=+0.273199997 container attach 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.729679854 +0000 UTC m=+0.273523935 container died 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c80deeab782ec789365fcad41cdca6f63563e294a4060a4c5c3673481706be-merged.mount: Deactivated successfully.
Feb 02 15:56:33 compute-0 podman[278884]: 2026-02-02 15:56:33.78785364 +0000 UTC m=+0.331697731 container remove 67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:56:33 compute-0 systemd[1]: libpod-conmon-67cdb27ce5a931ae973b31f48616ce70c3cdc7a94f13c8b1f049364c5e3ebf98.scope: Deactivated successfully.
Feb 02 15:56:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:33 compute-0 podman[278926]: 2026-02-02 15:56:33.979338314 +0000 UTC m=+0.043907598 container create 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:56:34 compute-0 systemd[1]: Started libpod-conmon-1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72.scope.
Feb 02 15:56:34 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1363aa6f571dc8484007f09b62cffe72b730ce100203913fbf0468b7c16726ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1363aa6f571dc8484007f09b62cffe72b730ce100203913fbf0468b7c16726ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1363aa6f571dc8484007f09b62cffe72b730ce100203913fbf0468b7c16726ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1363aa6f571dc8484007f09b62cffe72b730ce100203913fbf0468b7c16726ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:33.964599462 +0000 UTC m=+0.029168736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:34.090227292 +0000 UTC m=+0.154796596 container init 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:34.096186028 +0000 UTC m=+0.160755302 container start 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:34.138083075 +0000 UTC m=+0.202652369 container attach 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 15:56:34 compute-0 ceph-mon[75334]: pgmap v1975: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:34 compute-0 nova_compute[239545]: 2026-02-02 15:56:34.571 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:34 compute-0 nova_compute[239545]: 2026-02-02 15:56:34.724 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:34 compute-0 lvm[279021]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:56:34 compute-0 lvm[279020]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:56:34 compute-0 lvm[279021]: VG ceph_vg1 finished
Feb 02 15:56:34 compute-0 lvm[279020]: VG ceph_vg0 finished
Feb 02 15:56:34 compute-0 lvm[279023]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:56:34 compute-0 lvm[279023]: VG ceph_vg2 finished
Feb 02 15:56:34 compute-0 cool_perlman[278942]: {}
Feb 02 15:56:34 compute-0 systemd[1]: libpod-1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72.scope: Deactivated successfully.
Feb 02 15:56:34 compute-0 systemd[1]: libpod-1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72.scope: Consumed 1.146s CPU time.
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:34.890637161 +0000 UTC m=+0.955206435 container died 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:56:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1363aa6f571dc8484007f09b62cffe72b730ce100203913fbf0468b7c16726ed-merged.mount: Deactivated successfully.
Feb 02 15:56:34 compute-0 podman[278926]: 2026-02-02 15:56:34.95994395 +0000 UTC m=+1.024513234 container remove 1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:56:34 compute-0 systemd[1]: libpod-conmon-1d30420dbbb13ed30655a430916c92706f39c74685a31091054227f26e8dcc72.scope: Deactivated successfully.
Feb 02 15:56:34 compute-0 sudo[278847]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:56:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:56:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:35 compute-0 sudo[279040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:56:35 compute-0 sudo[279040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:56:35 compute-0 sudo[279040]: pam_unix(sudo:session): session closed for user root
Feb 02 15:56:36 compute-0 ceph-mon[75334]: pgmap v1976: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:36 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:56:36 compute-0 podman[279066]: 2026-02-02 15:56:36.328016663 +0000 UTC m=+0.059712814 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:56:36 compute-0 podman[279065]: 2026-02-02 15:56:36.355766013 +0000 UTC m=+0.087250559 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 02 15:56:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:38 compute-0 ceph-mon[75334]: pgmap v1977: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:39 compute-0 nova_compute[239545]: 2026-02-02 15:56:39.574 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:39 compute-0 nova_compute[239545]: 2026-02-02 15:56:39.726 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:40 compute-0 ceph-mon[75334]: pgmap v1978: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:42 compute-0 ceph-mon[75334]: pgmap v1979: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:56:42
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:56:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:44 compute-0 ceph-mon[75334]: pgmap v1980: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:44 compute-0 nova_compute[239545]: 2026-02-02 15:56:44.576 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:44 compute-0 nova_compute[239545]: 2026-02-02 15:56:44.728 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:56:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:56:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:56:46 compute-0 ceph-mon[75334]: pgmap v1981: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:48 compute-0 ceph-mon[75334]: pgmap v1982: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:49 compute-0 nova_compute[239545]: 2026-02-02 15:56:49.577 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:49 compute-0 nova_compute[239545]: 2026-02-02 15:56:49.729 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:51 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:51 compute-0 ceph-mon[75334]: pgmap v1983: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:52 compute-0 ceph-mon[75334]: pgmap v1984: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:53 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:54 compute-0 nova_compute[239545]: 2026-02-02 15:56:54.579 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:56:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:56:54 compute-0 nova_compute[239545]: 2026-02-02 15:56:54.731 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:54 compute-0 ceph-mon[75334]: pgmap v1985: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:55 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:56 compute-0 ceph-mon[75334]: pgmap v1986: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:57 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:56:58 compute-0 ceph-mon[75334]: pgmap v1987: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:56:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:56:59.267 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:56:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:56:59.268 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:56:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:56:59.269 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:56:59 compute-0 nova_compute[239545]: 2026-02-02 15:56:59.581 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:59 compute-0 nova_compute[239545]: 2026-02-02 15:56:59.735 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:56:59 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:00 compute-0 ceph-mon[75334]: pgmap v1988: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:01 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:02 compute-0 ceph-mon[75334]: pgmap v1989: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:03 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:03 compute-0 ceph-mon[75334]: pgmap v1990: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:04 compute-0 nova_compute[239545]: 2026-02-02 15:57:04.584 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:04 compute-0 nova_compute[239545]: 2026-02-02 15:57:04.736 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:05 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:06 compute-0 ceph-mon[75334]: pgmap v1991: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:07 compute-0 podman[279115]: 2026-02-02 15:57:07.313583919 +0000 UTC m=+0.051175696 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 02 15:57:07 compute-0 podman[279114]: 2026-02-02 15:57:07.387351696 +0000 UTC m=+0.122526114 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 02 15:57:07 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:08 compute-0 ceph-mon[75334]: pgmap v1992: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:09 compute-0 nova_compute[239545]: 2026-02-02 15:57:09.586 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:09 compute-0 nova_compute[239545]: 2026-02-02 15:57:09.738 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:09 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:10 compute-0 ceph-mon[75334]: pgmap v1993: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:11 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:12 compute-0 ceph-mon[75334]: pgmap v1994: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:13 compute-0 nova_compute[239545]: 2026-02-02 15:57:13.548 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:13 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:14 compute-0 nova_compute[239545]: 2026-02-02 15:57:14.588 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:14 compute-0 nova_compute[239545]: 2026-02-02 15:57:14.767 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:14 compute-0 ceph-mon[75334]: pgmap v1995: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:15 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:16 compute-0 nova_compute[239545]: 2026-02-02 15:57:16.548 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:16 compute-0 nova_compute[239545]: 2026-02-02 15:57:16.549 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:57:16 compute-0 nova_compute[239545]: 2026-02-02 15:57:16.549 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:57:16 compute-0 ceph-mon[75334]: pgmap v1996: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:17 compute-0 nova_compute[239545]: 2026-02-02 15:57:17.054 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:57:17 compute-0 nova_compute[239545]: 2026-02-02 15:57:17.055 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:57:17 compute-0 nova_compute[239545]: 2026-02-02 15:57:17.055 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:57:17 compute-0 nova_compute[239545]: 2026-02-02 15:57:17.056 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:57:17 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:18 compute-0 nova_compute[239545]: 2026-02-02 15:57:18.346 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:57:18 compute-0 nova_compute[239545]: 2026-02-02 15:57:18.360 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:57:18 compute-0 nova_compute[239545]: 2026-02-02 15:57:18.361 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:57:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:18 compute-0 ceph-mon[75334]: pgmap v1997: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:19 compute-0 nova_compute[239545]: 2026-02-02 15:57:19.590 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:19 compute-0 nova_compute[239545]: 2026-02-02 15:57:19.771 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:19 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:19 compute-0 ceph-mon[75334]: pgmap v1998: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:21 compute-0 nova_compute[239545]: 2026-02-02 15:57:21.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:21 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:22 compute-0 nova_compute[239545]: 2026-02-02 15:57:22.542 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:22 compute-0 ceph-mon[75334]: pgmap v1999: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.576 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.577 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:57:23 compute-0 nova_compute[239545]: 2026-02-02 15:57:23.577 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:57:23 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:57:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510406905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.125 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.223 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.224 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.224 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.373 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.374 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3956MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.375 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.375 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.447 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.447 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.447 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.461 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.477 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.478 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.496 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.517 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.555 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.592 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:24 compute-0 nova_compute[239545]: 2026-02-02 15:57:24.775 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:24 compute-0 ceph-mon[75334]: pgmap v2000: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2510406905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:57:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:57:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964149443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:57:25 compute-0 nova_compute[239545]: 2026-02-02 15:57:25.179 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:57:25 compute-0 nova_compute[239545]: 2026-02-02 15:57:25.187 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:57:25 compute-0 nova_compute[239545]: 2026-02-02 15:57:25.205 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:57:25 compute-0 nova_compute[239545]: 2026-02-02 15:57:25.209 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:57:25 compute-0 nova_compute[239545]: 2026-02-02 15:57:25.209 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:57:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:57:25 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 118K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 11K syncs, 2.68 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1700 writes, 5419 keys, 1700 commit groups, 1.0 writes per commit group, ingest: 7.45 MB, 0.01 MB/s
                                           Interval WAL: 1700 writes, 710 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:57:25 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1964149443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:57:26 compute-0 nova_compute[239545]: 2026-02-02 15:57:26.211 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:26 compute-0 nova_compute[239545]: 2026-02-02 15:57:26.211 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:26 compute-0 nova_compute[239545]: 2026-02-02 15:57:26.212 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:26 compute-0 nova_compute[239545]: 2026-02-02 15:57:26.544 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:57:26 compute-0 nova_compute[239545]: 2026-02-02 15:57:26.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:57:26 compute-0 ceph-mon[75334]: pgmap v2001: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:27 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:57:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310859035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:57:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:57:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310859035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:57:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:28 compute-0 ceph-mon[75334]: pgmap v2002: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1310859035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:57:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/1310859035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:57:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:57:29 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 118K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.68 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1490 writes, 4587 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
                                           Interval WAL: 1490 writes, 651 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:57:29 compute-0 nova_compute[239545]: 2026-02-02 15:57:29.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:29 compute-0 nova_compute[239545]: 2026-02-02 15:57:29.776 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:29 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:29 compute-0 ceph-mon[75334]: pgmap v2003: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:31 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:32 compute-0 ceph-mon[75334]: pgmap v2004: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 15:57:33 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 89K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 8124 syncs, 2.79 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1230 writes, 3506 keys, 1230 commit groups, 1.0 writes per commit group, ingest: 4.33 MB, 0.01 MB/s
                                           Interval WAL: 1230 writes, 540 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 15:57:33 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:34 compute-0 nova_compute[239545]: 2026-02-02 15:57:34.597 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:34 compute-0 nova_compute[239545]: 2026-02-02 15:57:34.818 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:34 compute-0 ceph-mon[75334]: pgmap v2005: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:35 compute-0 sudo[279205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:57:35 compute-0 sudo[279205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:35 compute-0 sudo[279205]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:35 compute-0 sudo[279230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:57:35 compute-0 sudo[279230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:35 compute-0 ceph-mgr[75628]: [devicehealth INFO root] Check health
Feb 02 15:57:35 compute-0 sudo[279230]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:57:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:35 compute-0 sudo[279286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:57:35 compute-0 sudo[279286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:35 compute-0 sudo[279286]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:57:35 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:57:35 compute-0 sudo[279311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:57:35 compute-0 sudo[279311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.194361004 +0000 UTC m=+0.089999114 container create 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.134739721 +0000 UTC m=+0.030377831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:36 compute-0 systemd[1]: Started libpod-conmon-5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b.scope.
Feb 02 15:57:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.307081027 +0000 UTC m=+0.202719127 container init 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.314327546 +0000 UTC m=+0.209965656 container start 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.318882159 +0000 UTC m=+0.214520249 container attach 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 02 15:57:36 compute-0 keen_ride[279364]: 167 167
Feb 02 15:57:36 compute-0 systemd[1]: libpod-5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b.scope: Deactivated successfully.
Feb 02 15:57:36 compute-0 conmon[279364]: conmon 5fed544624945e15d989 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b.scope/container/memory.events
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.323111633 +0000 UTC m=+0.218749713 container died 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 15:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-68b560972c1e6c968484f97c2d031d3336bdebd842e1416a5ab807c28976cb1a-merged.mount: Deactivated successfully.
Feb 02 15:57:36 compute-0 podman[279348]: 2026-02-02 15:57:36.37076469 +0000 UTC m=+0.266402770 container remove 5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:36 compute-0 systemd[1]: libpod-conmon-5fed544624945e15d9897c9e3c4b6d76c846f4e4e6aa7d7e388d1214123ae00b.scope: Deactivated successfully.
Feb 02 15:57:36 compute-0 podman[279388]: 2026-02-02 15:57:36.549449141 +0000 UTC m=+0.047669138 container create 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 15:57:36 compute-0 systemd[1]: Started libpod-conmon-1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36.scope.
Feb 02 15:57:36 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:36 compute-0 podman[279388]: 2026-02-02 15:57:36.527607763 +0000 UTC m=+0.025827740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:36 compute-0 podman[279388]: 2026-02-02 15:57:36.651681736 +0000 UTC m=+0.149901733 container init 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:36 compute-0 podman[279388]: 2026-02-02 15:57:36.662630726 +0000 UTC m=+0.160850693 container start 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 02 15:57:36 compute-0 podman[279388]: 2026-02-02 15:57:36.666744018 +0000 UTC m=+0.164963985 container attach 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 15:57:36 compute-0 ceph-mon[75334]: pgmap v2006: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:37 compute-0 unruffled_lamport[279404]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:57:37 compute-0 unruffled_lamport[279404]: --> All data devices are unavailable
Feb 02 15:57:37 compute-0 systemd[1]: libpod-1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36.scope: Deactivated successfully.
Feb 02 15:57:37 compute-0 podman[279388]: 2026-02-02 15:57:37.177294935 +0000 UTC m=+0.675514902 container died 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-287e2407b379138abd82e227ad46b6ef5e79de6006fe274b20dd3f88beea3d83-merged.mount: Deactivated successfully.
Feb 02 15:57:37 compute-0 podman[279388]: 2026-02-02 15:57:37.225880924 +0000 UTC m=+0.724100921 container remove 1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:57:37 compute-0 systemd[1]: libpod-conmon-1069c9ce8ecc3247469c493b64817ca29dcb9e431035704f218f31b693f3cd36.scope: Deactivated successfully.
Feb 02 15:57:37 compute-0 sudo[279311]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:37 compute-0 sudo[279436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:57:37 compute-0 sudo[279436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:37 compute-0 sudo[279436]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:37 compute-0 podman[279460]: 2026-02-02 15:57:37.443626801 +0000 UTC m=+0.060769812 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:57:37 compute-0 sudo[279467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:57:37 compute-0 sudo[279467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:37 compute-0 podman[279503]: 2026-02-02 15:57:37.543016175 +0000 UTC m=+0.080607632 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.730185107 +0000 UTC m=+0.047233368 container create 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:37 compute-0 systemd[1]: Started libpod-conmon-1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b.scope.
Feb 02 15:57:37 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.708732106 +0000 UTC m=+0.025780357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:37 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.810974441 +0000 UTC m=+0.128022752 container init 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.818767454 +0000 UTC m=+0.135815675 container start 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.822939017 +0000 UTC m=+0.139987238 container attach 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:57:37 compute-0 eager_bhaskara[279560]: 167 167
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.825039448 +0000 UTC m=+0.142087709 container died 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:57:37 compute-0 systemd[1]: libpod-1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b.scope: Deactivated successfully.
Feb 02 15:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f329defda9160d15d83e07b091f1547fb21915b99670485935bc08fdf6c836-merged.mount: Deactivated successfully.
Feb 02 15:57:37 compute-0 podman[279544]: 2026-02-02 15:57:37.865794514 +0000 UTC m=+0.182842735 container remove 1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 15:57:37 compute-0 systemd[1]: libpod-conmon-1fcd2527797a6d4921528c59d1ed3844c42b0b86fab45aab2d4d0d241ac3f21b.scope: Deactivated successfully.
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.025919029 +0000 UTC m=+0.057773108 container create 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:57:38 compute-0 systemd[1]: Started libpod-conmon-348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa.scope.
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.005871863 +0000 UTC m=+0.037725982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:38 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeea207ede876c81b1adf82313c17a4477a08f2256cd1e154ad8dffc664fc412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeea207ede876c81b1adf82313c17a4477a08f2256cd1e154ad8dffc664fc412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeea207ede876c81b1adf82313c17a4477a08f2256cd1e154ad8dffc664fc412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeea207ede876c81b1adf82313c17a4477a08f2256cd1e154ad8dffc664fc412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.14339627 +0000 UTC m=+0.175250369 container init 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.1494751 +0000 UTC m=+0.181329169 container start 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.227364923 +0000 UTC m=+0.259219002 container attach 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 02 15:57:38 compute-0 angry_mayer[279601]: {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     "0": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "devices": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "/dev/loop3"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             ],
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_name": "ceph_lv0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_size": "21470642176",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "name": "ceph_lv0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "tags": {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_name": "ceph",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.crush_device_class": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.encrypted": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.objectstore": "bluestore",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_id": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.vdo": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.with_tpm": "0"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             },
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "vg_name": "ceph_vg0"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         }
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     ],
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     "1": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "devices": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "/dev/loop4"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             ],
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_name": "ceph_lv1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_size": "21470642176",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "name": "ceph_lv1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "tags": {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_name": "ceph",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.crush_device_class": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.encrypted": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.objectstore": "bluestore",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_id": "1",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.vdo": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.with_tpm": "0"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             },
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "vg_name": "ceph_vg1"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         }
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     ],
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     "2": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "devices": [
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "/dev/loop5"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             ],
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_name": "ceph_lv2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_size": "21470642176",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "name": "ceph_lv2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "tags": {
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.cluster_name": "ceph",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.crush_device_class": "",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.encrypted": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.objectstore": "bluestore",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osd_id": "2",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.vdo": "0",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:                 "ceph.with_tpm": "0"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             },
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "type": "block",
Feb 02 15:57:38 compute-0 angry_mayer[279601]:             "vg_name": "ceph_vg2"
Feb 02 15:57:38 compute-0 angry_mayer[279601]:         }
Feb 02 15:57:38 compute-0 angry_mayer[279601]:     ]
Feb 02 15:57:38 compute-0 angry_mayer[279601]: }
Feb 02 15:57:38 compute-0 systemd[1]: libpod-348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa.scope: Deactivated successfully.
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.474129186 +0000 UTC m=+0.505983295 container died 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeea207ede876c81b1adf82313c17a4477a08f2256cd1e154ad8dffc664fc412-merged.mount: Deactivated successfully.
Feb 02 15:57:38 compute-0 podman[279584]: 2026-02-02 15:57:38.605628143 +0000 UTC m=+0.637482212 container remove 348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mayer, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 02 15:57:38 compute-0 systemd[1]: libpod-conmon-348f1ee2843295a7e217241cd664cf6458e60253fba0a78d1c44c175e97a82fa.scope: Deactivated successfully.
Feb 02 15:57:38 compute-0 sudo[279467]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:38 compute-0 sudo[279622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:57:38 compute-0 sudo[279622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:38 compute-0 sudo[279622]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:38 compute-0 sudo[279647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:57:38 compute-0 sudo[279647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:38 compute-0 ceph-mon[75334]: pgmap v2007: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.065922949 +0000 UTC m=+0.046963151 container create d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 02 15:57:39 compute-0 systemd[1]: Started libpod-conmon-d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39.scope.
Feb 02 15:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.043841114 +0000 UTC m=+0.024881376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.143594357 +0000 UTC m=+0.124634559 container init d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.150175189 +0000 UTC m=+0.131215371 container start d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.153807368 +0000 UTC m=+0.134847660 container attach d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:57:39 compute-0 ecstatic_mahavira[279699]: 167 167
Feb 02 15:57:39 compute-0 systemd[1]: libpod-d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39.scope: Deactivated successfully.
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.156514115 +0000 UTC m=+0.137554307 container died d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e589bcdeb0257e70be8f4cd0d4b1ea04b049389c3d506dc90e42c4df5c250d1c-merged.mount: Deactivated successfully.
Feb 02 15:57:39 compute-0 podman[279683]: 2026-02-02 15:57:39.194465762 +0000 UTC m=+0.175505954 container remove d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:57:39 compute-0 systemd[1]: libpod-conmon-d1d82cedf2bc57e6dc3c9bfedf73e231ea55643511ec332c835cffd4c38c8e39.scope: Deactivated successfully.
Feb 02 15:57:39 compute-0 podman[279723]: 2026-02-02 15:57:39.396981013 +0000 UTC m=+0.066495803 container create 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:39 compute-0 systemd[1]: Started libpod-conmon-890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50.scope.
Feb 02 15:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b40f3b87aede61b82736a2f22877d001fc4565619a5c9a775b0acc61e1ac6b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:39 compute-0 podman[279723]: 2026-02-02 15:57:39.37459296 +0000 UTC m=+0.044107750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b40f3b87aede61b82736a2f22877d001fc4565619a5c9a775b0acc61e1ac6b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b40f3b87aede61b82736a2f22877d001fc4565619a5c9a775b0acc61e1ac6b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b40f3b87aede61b82736a2f22877d001fc4565619a5c9a775b0acc61e1ac6b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:57:39 compute-0 podman[279723]: 2026-02-02 15:57:39.501629707 +0000 UTC m=+0.171144497 container init 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:57:39 compute-0 podman[279723]: 2026-02-02 15:57:39.51066926 +0000 UTC m=+0.180184010 container start 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:57:39 compute-0 podman[279723]: 2026-02-02 15:57:39.523854756 +0000 UTC m=+0.193369556 container attach 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:57:39 compute-0 nova_compute[239545]: 2026-02-02 15:57:39.599 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:39 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:39 compute-0 nova_compute[239545]: 2026-02-02 15:57:39.822 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:40 compute-0 lvm[279818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:57:40 compute-0 lvm[279818]: VG ceph_vg0 finished
Feb 02 15:57:40 compute-0 lvm[279819]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:57:40 compute-0 lvm[279819]: VG ceph_vg1 finished
Feb 02 15:57:40 compute-0 lvm[279821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:57:40 compute-0 lvm[279821]: VG ceph_vg2 finished
Feb 02 15:57:40 compute-0 peaceful_hypatia[279739]: {}
Feb 02 15:57:40 compute-0 systemd[1]: libpod-890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50.scope: Deactivated successfully.
Feb 02 15:57:40 compute-0 systemd[1]: libpod-890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50.scope: Consumed 1.280s CPU time.
Feb 02 15:57:40 compute-0 podman[279723]: 2026-02-02 15:57:40.377464473 +0000 UTC m=+1.046979253 container died 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 15:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b40f3b87aede61b82736a2f22877d001fc4565619a5c9a775b0acc61e1ac6b9-merged.mount: Deactivated successfully.
Feb 02 15:57:40 compute-0 podman[279723]: 2026-02-02 15:57:40.423236944 +0000 UTC m=+1.092751744 container remove 890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hypatia, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 02 15:57:40 compute-0 systemd[1]: libpod-conmon-890b40e7c173b2fb7aef17c9db2cf2ae567a510fc5746a5c86a5491719f98e50.scope: Deactivated successfully.
Feb 02 15:57:40 compute-0 sudo[279647]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:57:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:40 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:57:40 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:40 compute-0 sudo[279836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:57:40 compute-0 sudo[279836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:57:40 compute-0 sudo[279836]: pam_unix(sudo:session): session closed for user root
Feb 02 15:57:40 compute-0 ceph-mon[75334]: pgmap v2008: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:40 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:57:41 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:41 compute-0 ceph-mon[75334]: pgmap v2009: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:57:42
Feb 02 15:57:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:57:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:57:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Feb 02 15:57:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:57:43 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:44 compute-0 nova_compute[239545]: 2026-02-02 15:57:44.602 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:57:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:57:44 compute-0 nova_compute[239545]: 2026-02-02 15:57:44.860 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:44 compute-0 ceph-mon[75334]: pgmap v2010: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:57:45 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:46 compute-0 ceph-mon[75334]: pgmap v2011: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.853685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868853906, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1383, "num_deletes": 252, "total_data_size": 2216363, "memory_usage": 2275032, "flush_reason": "Manual Compaction"}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868862134, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1295123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38908, "largest_seqno": 40290, "table_properties": {"data_size": 1290213, "index_size": 2242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12801, "raw_average_key_size": 20, "raw_value_size": 1279504, "raw_average_value_size": 2070, "num_data_blocks": 103, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047725, "oldest_key_time": 1770047725, "file_creation_time": 1770047868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 8509 microseconds, and 3805 cpu microseconds.
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.862188) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1295123 bytes OK
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.862214) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.864627) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.864647) EVENT_LOG_v1 {"time_micros": 1770047868864641, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.864674) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2210198, prev total WAL file size 2210198, number of live WAL files 2.
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.865541) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1264KB)], [80(11MB)]
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868865604, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13397723, "oldest_snapshot_seqno": -1}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7313 keys, 10900236 bytes, temperature: kUnknown
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868985167, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10900236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10847618, "index_size": 33242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 183460, "raw_average_key_size": 25, "raw_value_size": 10712777, "raw_average_value_size": 1464, "num_data_blocks": 1325, "num_entries": 7313, "num_filter_entries": 7313, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.985619) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10900236 bytes
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.991606) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.9 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(18.8) write-amplify(8.4) OK, records in: 7763, records dropped: 450 output_compression: NoCompression
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.991629) EVENT_LOG_v1 {"time_micros": 1770047868991617, "job": 46, "event": "compaction_finished", "compaction_time_micros": 119688, "compaction_time_cpu_micros": 33491, "output_level": 6, "num_output_files": 1, "total_output_size": 10900236, "num_input_records": 7763, "num_output_records": 7313, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868992495, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047868994489, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.865362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.994563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.994570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.994572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.994574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:48 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:57:48.994576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:57:49 compute-0 nova_compute[239545]: 2026-02-02 15:57:49.604 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:49 compute-0 ceph-mon[75334]: pgmap v2012: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:49 compute-0 nova_compute[239545]: 2026-02-02 15:57:49.862 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:51 compute-0 ceph-mon[75334]: pgmap v2013: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:52 compute-0 ceph-mon[75334]: pgmap v2014: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:54 compute-0 nova_compute[239545]: 2026-02-02 15:57:54.606 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:57:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:57:54 compute-0 nova_compute[239545]: 2026-02-02 15:57:54.865 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:55 compute-0 ceph-mon[75334]: pgmap v2015: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:57 compute-0 ceph-mon[75334]: pgmap v2016: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:57:59 compute-0 ceph-mon[75334]: pgmap v2017: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:57:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:57:59.268 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:57:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:57:59.269 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:57:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:57:59.270 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:57:59 compute-0 nova_compute[239545]: 2026-02-02 15:57:59.608 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:57:59 compute-0 nova_compute[239545]: 2026-02-02 15:57:59.904 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:01 compute-0 ceph-mon[75334]: pgmap v2018: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:03 compute-0 ceph-mon[75334]: pgmap v2019: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:04 compute-0 nova_compute[239545]: 2026-02-02 15:58:04.609 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:04 compute-0 nova_compute[239545]: 2026-02-02 15:58:04.907 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:05 compute-0 ceph-mon[75334]: pgmap v2020: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:07 compute-0 ceph-mon[75334]: pgmap v2021: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:08 compute-0 podman[279862]: 2026-02-02 15:58:08.321640997 +0000 UTC m=+0.058482955 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:58:08 compute-0 podman[279861]: 2026-02-02 15:58:08.342865951 +0000 UTC m=+0.079707579 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 02 15:58:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:09 compute-0 ceph-mon[75334]: pgmap v2022: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:09 compute-0 nova_compute[239545]: 2026-02-02 15:58:09.612 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:09 compute-0 nova_compute[239545]: 2026-02-02 15:58:09.909 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:11 compute-0 ceph-mon[75334]: pgmap v2023: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:13 compute-0 ceph-mon[75334]: pgmap v2024: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:13 compute-0 nova_compute[239545]: 2026-02-02 15:58:13.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.861446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893861474, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 256, "total_data_size": 350704, "memory_usage": 358888, "flush_reason": "Manual Compaction"}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893865255, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 347746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40291, "largest_seqno": 40726, "table_properties": {"data_size": 345227, "index_size": 619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5865, "raw_average_key_size": 17, "raw_value_size": 340238, "raw_average_value_size": 1040, "num_data_blocks": 28, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047869, "oldest_key_time": 1770047869, "file_creation_time": 1770047893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3888 microseconds, and 1807 cpu microseconds.
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.865325) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 347746 bytes OK
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.865351) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.867452) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.867480) EVENT_LOG_v1 {"time_micros": 1770047893867471, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.867509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 348020, prev total WAL file size 348020, number of live WAL files 2.
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.868412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(339KB)], [83(10MB)]
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893868474, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11247982, "oldest_snapshot_seqno": -1}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7121 keys, 11096010 bytes, temperature: kUnknown
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893941405, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11096010, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11043980, "index_size": 33102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 180389, "raw_average_key_size": 25, "raw_value_size": 10911754, "raw_average_value_size": 1532, "num_data_blocks": 1316, "num_entries": 7121, "num_filter_entries": 7121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.942046) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11096010 bytes
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.943751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.7 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.4 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(64.3) write-amplify(31.9) OK, records in: 7640, records dropped: 519 output_compression: NoCompression
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.943796) EVENT_LOG_v1 {"time_micros": 1770047893943775, "job": 48, "event": "compaction_finished", "compaction_time_micros": 73160, "compaction_time_cpu_micros": 45111, "output_level": 6, "num_output_files": 1, "total_output_size": 11096010, "num_input_records": 7640, "num_output_records": 7121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893944326, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047893946762, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.868007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.947038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.947048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.947051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.947056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:13 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:58:13.947060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:14 compute-0 nova_compute[239545]: 2026-02-02 15:58:14.613 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:14 compute-0 nova_compute[239545]: 2026-02-02 15:58:14.953 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:15 compute-0 ceph-mon[75334]: pgmap v2025: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:16 compute-0 nova_compute[239545]: 2026-02-02 15:58:16.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:16 compute-0 nova_compute[239545]: 2026-02-02 15:58:16.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:58:16 compute-0 nova_compute[239545]: 2026-02-02 15:58:16.547 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:58:17 compute-0 nova_compute[239545]: 2026-02-02 15:58:17.052 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:58:17 compute-0 nova_compute[239545]: 2026-02-02 15:58:17.053 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:58:17 compute-0 nova_compute[239545]: 2026-02-02 15:58:17.053 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:58:17 compute-0 nova_compute[239545]: 2026-02-02 15:58:17.054 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:58:17 compute-0 ceph-mon[75334]: pgmap v2026: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:18 compute-0 nova_compute[239545]: 2026-02-02 15:58:18.244 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:58:18 compute-0 nova_compute[239545]: 2026-02-02 15:58:18.258 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:58:18 compute-0 nova_compute[239545]: 2026-02-02 15:58:18.259 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:58:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:19 compute-0 nova_compute[239545]: 2026-02-02 15:58:19.617 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:19 compute-0 ceph-mon[75334]: pgmap v2027: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:19 compute-0 nova_compute[239545]: 2026-02-02 15:58:19.956 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:21 compute-0 ceph-mon[75334]: pgmap v2028: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:22 compute-0 nova_compute[239545]: 2026-02-02 15:58:22.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:22 compute-0 nova_compute[239545]: 2026-02-02 15:58:22.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:23 compute-0 ceph-mon[75334]: pgmap v2029: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.619 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.695 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.719 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.719 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.720 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.720 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.720 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:58:24 compute-0 nova_compute[239545]: 2026-02-02 15:58:24.957 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:58:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359532148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.299 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.392 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.392 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.392 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.588 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.589 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3976MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.590 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.590 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.680 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.681 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.681 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:58:25 compute-0 nova_compute[239545]: 2026-02-02 15:58:25.719 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:58:25 compute-0 ceph-mon[75334]: pgmap v2030: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 02 15:58:25 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3359532148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:58:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:58:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:58:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564159823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:58:26 compute-0 nova_compute[239545]: 2026-02-02 15:58:26.287 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:58:26 compute-0 nova_compute[239545]: 2026-02-02 15:58:26.292 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:58:26 compute-0 nova_compute[239545]: 2026-02-02 15:58:26.308 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:58:26 compute-0 nova_compute[239545]: 2026-02-02 15:58:26.309 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:58:26 compute-0 nova_compute[239545]: 2026-02-02 15:58:26.309 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:58:26 compute-0 ceph-mon[75334]: pgmap v2031: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:58:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/564159823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:58:27 compute-0 nova_compute[239545]: 2026-02-02 15:58:27.159 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:27 compute-0 nova_compute[239545]: 2026-02-02 15:58:27.160 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:27 compute-0 nova_compute[239545]: 2026-02-02 15:58:27.160 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:58:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/136683552' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:58:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:58:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/136683552' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:58:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:58:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/136683552' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:58:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/136683552' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:58:28 compute-0 nova_compute[239545]: 2026-02-02 15:58:28.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:58:28 compute-0 nova_compute[239545]: 2026-02-02 15:58:28.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:58:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:29 compute-0 ceph-mon[75334]: pgmap v2032: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Feb 02 15:58:29 compute-0 nova_compute[239545]: 2026-02-02 15:58:29.620 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:30 compute-0 nova_compute[239545]: 2026-02-02 15:58:29.999 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:31 compute-0 ceph-mon[75334]: pgmap v2033: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:33 compute-0 ceph-mon[75334]: pgmap v2034: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:34 compute-0 nova_compute[239545]: 2026-02-02 15:58:34.623 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:35 compute-0 nova_compute[239545]: 2026-02-02 15:58:35.058 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:35 compute-0 ceph-mon[75334]: pgmap v2035: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 02 15:58:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 02 15:58:37 compute-0 ceph-mon[75334]: pgmap v2036: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 02 15:58:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:58:38 compute-0 podman[279952]: 2026-02-02 15:58:38.615517361 +0000 UTC m=+0.079545095 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:58:38 compute-0 podman[279951]: 2026-02-02 15:58:38.635014643 +0000 UTC m=+0.110357736 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 02 15:58:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:39 compute-0 ceph-mon[75334]: pgmap v2037: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:58:39 compute-0 nova_compute[239545]: 2026-02-02 15:58:39.624 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:40 compute-0 nova_compute[239545]: 2026-02-02 15:58:40.060 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:58:40 compute-0 sudo[279996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:58:40 compute-0 sudo[279996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:40 compute-0 sudo[279996]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:40 compute-0 sudo[280021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:58:40 compute-0 sudo[280021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: pgmap v2038: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Feb 02 15:58:41 compute-0 sudo[280021]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:58:41 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:58:41 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:58:41 compute-0 sudo[280077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:58:41 compute-0 sudo[280077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:41 compute-0 sudo[280077]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:41 compute-0 sudo[280102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:58:41 compute-0 sudo[280102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.840929273 +0000 UTC m=+0.041189447 container create 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:58:41 compute-0 systemd[1]: Started libpod-conmon-4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6.scope.
Feb 02 15:58:41 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.910942362 +0000 UTC m=+0.111202576 container init 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.917259959 +0000 UTC m=+0.117520133 container start 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.823816441 +0000 UTC m=+0.024076635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.921416481 +0000 UTC m=+0.121676675 container attach 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:58:41 compute-0 elegant_sammet[280156]: 167 167
Feb 02 15:58:41 compute-0 systemd[1]: libpod-4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6.scope: Deactivated successfully.
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.928058305 +0000 UTC m=+0.128318479 container died 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dcb7763901c270716a6f656c7849d3ab2145b807f13ef315706abd4c69d5d47-merged.mount: Deactivated successfully.
Feb 02 15:58:41 compute-0 podman[280140]: 2026-02-02 15:58:41.967477219 +0000 UTC m=+0.167737403 container remove 4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 15:58:41 compute-0 systemd[1]: libpod-conmon-4858567e73040a08cf909cfdd8468d6cd3267b97b9ef9afbbc9bc5b6e45effa6.scope: Deactivated successfully.
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.103807985 +0000 UTC m=+0.044564042 container create e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:58:42 compute-0 systemd[1]: Started libpod-conmon-e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055.scope.
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:42 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.084523988 +0000 UTC m=+0.025280085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.186015945 +0000 UTC m=+0.126772012 container init e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.193203362 +0000 UTC m=+0.133959409 container start e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.197855987 +0000 UTC m=+0.138612034 container attach e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:58:42 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:58:42 compute-0 elastic_bhaskara[280197]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:58:42 compute-0 elastic_bhaskara[280197]: --> All data devices are unavailable
Feb 02 15:58:42 compute-0 systemd[1]: libpod-e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055.scope: Deactivated successfully.
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.713054228 +0000 UTC m=+0.653810355 container died e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 02 15:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec46c6182e159d53a5f5755d749c792ef061e99395e51c4967f41481b510966d-merged.mount: Deactivated successfully.
Feb 02 15:58:42 compute-0 podman[280180]: 2026-02-02 15:58:42.760273284 +0000 UTC m=+0.701029331 container remove e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:58:42 compute-0 systemd[1]: libpod-conmon-e012a67f3675e31421be4a0f909372121f27d873c3082f9268c1de5e48dc1055.scope: Deactivated successfully.
Feb 02 15:58:42 compute-0 sudo[280102]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:42 compute-0 sudo[280228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:58:42 compute-0 sudo[280228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:42 compute-0 sudo[280228]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:58:42
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'images', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Feb 02 15:58:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:58:42 compute-0 sudo[280253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:58:42 compute-0 sudo[280253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.185372091 +0000 UTC m=+0.034835051 container create de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 02 15:58:43 compute-0 systemd[1]: Started libpod-conmon-de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f.scope.
Feb 02 15:58:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.171230921 +0000 UTC m=+0.020693901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.270829041 +0000 UTC m=+0.120292051 container init de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.276147122 +0000 UTC m=+0.125610082 container start de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.279422283 +0000 UTC m=+0.128885303 container attach de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:58:43 compute-0 systemd[1]: libpod-de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f.scope: Deactivated successfully.
Feb 02 15:58:43 compute-0 suspicious_edison[280307]: 167 167
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.283233617 +0000 UTC m=+0.132696577 container died de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:58:43 compute-0 conmon[280307]: conmon de396990c24ddf1e4b0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f.scope/container/memory.events
Feb 02 15:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-360886e9baf627459f1093ea3e8ceddcaaa865281f2c14203223f9eee9d5d8c9-merged.mount: Deactivated successfully.
Feb 02 15:58:43 compute-0 podman[280291]: 2026-02-02 15:58:43.315923874 +0000 UTC m=+0.165386834 container remove de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 02 15:58:43 compute-0 systemd[1]: libpod-conmon-de396990c24ddf1e4b0d549b9de0cc088cbae42c95b548074835f5432fcdc61f.scope: Deactivated successfully.
Feb 02 15:58:43 compute-0 ceph-mon[75334]: pgmap v2039: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.447805551 +0000 UTC m=+0.038132823 container create d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 02 15:58:43 compute-0 systemd[1]: Started libpod-conmon-d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f.scope.
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.429398856 +0000 UTC m=+0.019726148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:43 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0a36b547baec94068030e7531c449f91cc1946d29d0cd664c1a8495a7e2eef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0a36b547baec94068030e7531c449f91cc1946d29d0cd664c1a8495a7e2eef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0a36b547baec94068030e7531c449f91cc1946d29d0cd664c1a8495a7e2eef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0a36b547baec94068030e7531c449f91cc1946d29d0cd664c1a8495a7e2eef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.553430838 +0000 UTC m=+0.143758150 container init d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.565835345 +0000 UTC m=+0.156162627 container start d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.569800473 +0000 UTC m=+0.160127785 container attach d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:58:43 compute-0 stupefied_pare[280349]: {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     "0": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "devices": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "/dev/loop3"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             ],
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_name": "ceph_lv0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_size": "21470642176",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "name": "ceph_lv0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "tags": {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_name": "ceph",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.crush_device_class": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.encrypted": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.objectstore": "bluestore",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_id": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.vdo": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.with_tpm": "0"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             },
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "vg_name": "ceph_vg0"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         }
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     ],
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     "1": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "devices": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "/dev/loop4"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             ],
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_name": "ceph_lv1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_size": "21470642176",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "name": "ceph_lv1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "tags": {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_name": "ceph",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.crush_device_class": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.encrypted": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.objectstore": "bluestore",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_id": "1",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.vdo": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.with_tpm": "0"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             },
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "vg_name": "ceph_vg1"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         }
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     ],
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     "2": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "devices": [
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "/dev/loop5"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             ],
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_name": "ceph_lv2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_size": "21470642176",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "name": "ceph_lv2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "tags": {
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.cluster_name": "ceph",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.crush_device_class": "",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.encrypted": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.objectstore": "bluestore",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osd_id": "2",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.vdo": "0",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:                 "ceph.with_tpm": "0"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             },
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "type": "block",
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:             "vg_name": "ceph_vg2"
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:         }
Feb 02 15:58:43 compute-0 stupefied_pare[280349]:     ]
Feb 02 15:58:43 compute-0 stupefied_pare[280349]: }
Feb 02 15:58:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:43 compute-0 systemd[1]: libpod-d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f.scope: Deactivated successfully.
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.887267952 +0000 UTC m=+0.477595264 container died d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0a36b547baec94068030e7531c449f91cc1946d29d0cd664c1a8495a7e2eef-merged.mount: Deactivated successfully.
Feb 02 15:58:43 compute-0 podman[280333]: 2026-02-02 15:58:43.945199412 +0000 UTC m=+0.535526724 container remove d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 02 15:58:43 compute-0 systemd[1]: libpod-conmon-d45a47a07af24f66471319db79c268486f26481be35d0000e0e5872d72aac31f.scope: Deactivated successfully.
Feb 02 15:58:43 compute-0 sudo[280253]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:44 compute-0 sudo[280371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:58:44 compute-0 sudo[280371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:44 compute-0 sudo[280371]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:44 compute-0 sudo[280396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:58:44 compute-0 sudo[280396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.456391685 +0000 UTC m=+0.059031349 container create 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:58:44 compute-0 systemd[1]: Started libpod-conmon-4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376.scope.
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.431968442 +0000 UTC m=+0.034608146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.549213277 +0000 UTC m=+0.151852991 container init 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.556660091 +0000 UTC m=+0.159299745 container start 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.560987438 +0000 UTC m=+0.163627102 container attach 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 15:58:44 compute-0 sharp_gauss[280451]: 167 167
Feb 02 15:58:44 compute-0 systemd[1]: libpod-4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376.scope: Deactivated successfully.
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.5630934 +0000 UTC m=+0.165733074 container died 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 15:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc623daca69e6016dd8d5aa93361fe44e2376d63edc9551d9a91963e67fd92b-merged.mount: Deactivated successfully.
Feb 02 15:58:44 compute-0 podman[280434]: 2026-02-02 15:58:44.607914317 +0000 UTC m=+0.210553981 container remove 4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gauss, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:58:44 compute-0 systemd[1]: libpod-conmon-4326272d43039e4f8641fc4bebb4e879f8e41f6255a63f8dd194051c699f6376.scope: Deactivated successfully.
Feb 02 15:58:44 compute-0 nova_compute[239545]: 2026-02-02 15:58:44.625 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:58:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:58:44 compute-0 podman[280474]: 2026-02-02 15:58:44.761190851 +0000 UTC m=+0.038101701 container create 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:58:44 compute-0 systemd[1]: Started libpod-conmon-9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908.scope.
Feb 02 15:58:44 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0018f314ffc6db3aebad669bc4e714ba19a1e3f9d0e9bc96dc30a6d20edb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0018f314ffc6db3aebad669bc4e714ba19a1e3f9d0e9bc96dc30a6d20edb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0018f314ffc6db3aebad669bc4e714ba19a1e3f9d0e9bc96dc30a6d20edb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0018f314ffc6db3aebad669bc4e714ba19a1e3f9d0e9bc96dc30a6d20edb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:58:44 compute-0 podman[280474]: 2026-02-02 15:58:44.744321805 +0000 UTC m=+0.021232705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:58:44 compute-0 podman[280474]: 2026-02-02 15:58:44.846493478 +0000 UTC m=+0.123404348 container init 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 02 15:58:44 compute-0 podman[280474]: 2026-02-02 15:58:44.852114856 +0000 UTC m=+0.129025716 container start 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 02 15:58:44 compute-0 podman[280474]: 2026-02-02 15:58:44.855771677 +0000 UTC m=+0.132682537 container attach 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:58:45 compute-0 nova_compute[239545]: 2026-02-02 15:58:45.104 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:58:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:58:45 compute-0 ceph-mon[75334]: pgmap v2040: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:45 compute-0 lvm[280569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:58:45 compute-0 lvm[280568]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:58:45 compute-0 lvm[280568]: VG ceph_vg1 finished
Feb 02 15:58:45 compute-0 lvm[280569]: VG ceph_vg0 finished
Feb 02 15:58:45 compute-0 lvm[280571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:58:45 compute-0 lvm[280571]: VG ceph_vg2 finished
Feb 02 15:58:45 compute-0 stoic_gagarin[280490]: {}
Feb 02 15:58:45 compute-0 systemd[1]: libpod-9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908.scope: Deactivated successfully.
Feb 02 15:58:45 compute-0 podman[280474]: 2026-02-02 15:58:45.58487501 +0000 UTC m=+0.861785940 container died 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:58:45 compute-0 systemd[1]: libpod-9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908.scope: Consumed 1.116s CPU time.
Feb 02 15:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec0018f314ffc6db3aebad669bc4e714ba19a1e3f9d0e9bc96dc30a6d20edb29-merged.mount: Deactivated successfully.
Feb 02 15:58:45 compute-0 podman[280474]: 2026-02-02 15:58:45.63303386 +0000 UTC m=+0.909944710 container remove 9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 02 15:58:45 compute-0 systemd[1]: libpod-conmon-9320d3d7f439fe4c0f710bf3bc12d968e453cc66dce499c72c5397c70644f908.scope: Deactivated successfully.
Feb 02 15:58:45 compute-0 sudo[280396]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:58:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:45 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:58:45 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:45 compute-0 sudo[280587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:58:45 compute-0 sudo[280587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:58:45 compute-0 sudo[280587]: pam_unix(sudo:session): session closed for user root
Feb 02 15:58:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:46 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:58:48 compute-0 ceph-mon[75334]: pgmap v2041: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:49 compute-0 ceph-mon[75334]: pgmap v2042: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:49 compute-0 nova_compute[239545]: 2026-02-02 15:58:49.629 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:50 compute-0 nova_compute[239545]: 2026-02-02 15:58:50.106 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:51 compute-0 ceph-mon[75334]: pgmap v2043: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:53 compute-0 ceph-mon[75334]: pgmap v2044: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:54 compute-0 nova_compute[239545]: 2026-02-02 15:58:54.632 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:58:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:58:55 compute-0 nova_compute[239545]: 2026-02-02 15:58:55.108 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:58:55 compute-0 ceph-mon[75334]: pgmap v2045: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:57 compute-0 ceph-mon[75334]: pgmap v2046: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:58:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:58:59.269 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:58:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:58:59.269 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:58:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:58:59.270 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:58:59 compute-0 ceph-mon[75334]: pgmap v2047: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:58:59 compute-0 nova_compute[239545]: 2026-02-02 15:58:59.635 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:00 compute-0 nova_compute[239545]: 2026-02-02 15:59:00.110 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:01 compute-0 ceph-mon[75334]: pgmap v2048: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:03 compute-0 ceph-mon[75334]: pgmap v2049: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:04 compute-0 nova_compute[239545]: 2026-02-02 15:59:04.637 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:05 compute-0 nova_compute[239545]: 2026-02-02 15:59:05.112 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:05 compute-0 ceph-mon[75334]: pgmap v2050: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:07 compute-0 ceph-mon[75334]: pgmap v2051: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:09 compute-0 ceph-mon[75334]: pgmap v2052: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:09 compute-0 podman[280613]: 2026-02-02 15:59:09.321483037 +0000 UTC m=+0.059266534 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 15:59:09 compute-0 podman[280612]: 2026-02-02 15:59:09.355849416 +0000 UTC m=+0.093377427 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 02 15:59:09 compute-0 nova_compute[239545]: 2026-02-02 15:59:09.639 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:10 compute-0 nova_compute[239545]: 2026-02-02 15:59:10.114 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:11 compute-0 ceph-mon[75334]: pgmap v2053: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:13 compute-0 ceph-mon[75334]: pgmap v2054: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.344791) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954344823, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 736, "num_deletes": 251, "total_data_size": 946062, "memory_usage": 960408, "flush_reason": "Manual Compaction"}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954350240, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 937497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40727, "largest_seqno": 41462, "table_properties": {"data_size": 933606, "index_size": 1671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8572, "raw_average_key_size": 19, "raw_value_size": 925919, "raw_average_value_size": 2099, "num_data_blocks": 74, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047894, "oldest_key_time": 1770047894, "file_creation_time": 1770047954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 5505 microseconds, and 2377 cpu microseconds.
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.350295) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 937497 bytes OK
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.350315) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.352256) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.352268) EVENT_LOG_v1 {"time_micros": 1770047954352264, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.352286) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 942291, prev total WAL file size 942291, number of live WAL files 2.
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.352737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(915KB)], [86(10MB)]
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954352796, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12033507, "oldest_snapshot_seqno": -1}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7048 keys, 10297609 bytes, temperature: kUnknown
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954409123, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10297609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10246868, "index_size": 32005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 179534, "raw_average_key_size": 25, "raw_value_size": 10116674, "raw_average_value_size": 1435, "num_data_blocks": 1263, "num_entries": 7048, "num_filter_entries": 7048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770047954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.409471) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10297609 bytes
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.411050) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.2 rd, 182.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(23.8) write-amplify(11.0) OK, records in: 7562, records dropped: 514 output_compression: NoCompression
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.411076) EVENT_LOG_v1 {"time_micros": 1770047954411064, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56447, "compaction_time_cpu_micros": 32925, "output_level": 6, "num_output_files": 1, "total_output_size": 10297609, "num_input_records": 7562, "num_output_records": 7048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954411503, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770047954413396, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.352563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.413558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.413567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.413569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.413571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-15:59:14.413573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 15:59:14 compute-0 nova_compute[239545]: 2026-02-02 15:59:14.640 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:15 compute-0 nova_compute[239545]: 2026-02-02 15:59:15.116 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:15 compute-0 ceph-mon[75334]: pgmap v2055: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:15 compute-0 nova_compute[239545]: 2026-02-02 15:59:15.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:17 compute-0 ceph-mon[75334]: pgmap v2056: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:17 compute-0 nova_compute[239545]: 2026-02-02 15:59:17.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:17 compute-0 nova_compute[239545]: 2026-02-02 15:59:17.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 02 15:59:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.568 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.569 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.569 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.785 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.785 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.785 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 15:59:18 compute-0 nova_compute[239545]: 2026-02-02 15:59:18.786 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 15:59:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:19 compute-0 ceph-mon[75334]: pgmap v2057: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:19 compute-0 nova_compute[239545]: 2026-02-02 15:59:19.642 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:20 compute-0 nova_compute[239545]: 2026-02-02 15:59:20.022 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 15:59:20 compute-0 nova_compute[239545]: 2026-02-02 15:59:20.038 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 15:59:20 compute-0 nova_compute[239545]: 2026-02-02 15:59:20.039 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 15:59:20 compute-0 nova_compute[239545]: 2026-02-02 15:59:20.118 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:21 compute-0 ceph-mon[75334]: pgmap v2058: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:22 compute-0 nova_compute[239545]: 2026-02-02 15:59:22.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:23 compute-0 ceph-mon[75334]: pgmap v2059: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:23 compute-0 nova_compute[239545]: 2026-02-02 15:59:23.541 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:24 compute-0 nova_compute[239545]: 2026-02-02 15:59:24.643 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:25 compute-0 nova_compute[239545]: 2026-02-02 15:59:25.120 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:25 compute-0 ceph-mon[75334]: pgmap v2060: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.572 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.573 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.573 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 15:59:26 compute-0 nova_compute[239545]: 2026-02-02 15:59:26.573 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:59:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:59:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165940307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.152 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.235 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.235 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.236 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 15:59:27 compute-0 ceph-mon[75334]: pgmap v2061: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4165940307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.445 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.447 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3946MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.447 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.447 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.600 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.601 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.601 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 15:59:27 compute-0 nova_compute[239545]: 2026-02-02 15:59:27.742 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 15:59:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 15:59:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2371079133' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:59:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 15:59:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2371079133' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:59:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 15:59:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816809617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:59:28 compute-0 nova_compute[239545]: 2026-02-02 15:59:28.291 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 15:59:28 compute-0 nova_compute[239545]: 2026-02-02 15:59:28.298 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 15:59:28 compute-0 nova_compute[239545]: 2026-02-02 15:59:28.324 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 15:59:28 compute-0 nova_compute[239545]: 2026-02-02 15:59:28.327 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 15:59:28 compute-0 nova_compute[239545]: 2026-02-02 15:59:28.327 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:59:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2371079133' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 15:59:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/2371079133' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 15:59:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3816809617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 15:59:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:29 compute-0 nova_compute[239545]: 2026-02-02 15:59:29.645 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:29 compute-0 ceph-mon[75334]: pgmap v2062: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:30 compute-0 nova_compute[239545]: 2026-02-02 15:59:30.149 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:30 compute-0 nova_compute[239545]: 2026-02-02 15:59:30.328 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:30 compute-0 nova_compute[239545]: 2026-02-02 15:59:30.329 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:30 compute-0 nova_compute[239545]: 2026-02-02 15:59:30.329 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 15:59:31 compute-0 ceph-mon[75334]: pgmap v2063: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:32 compute-0 nova_compute[239545]: 2026-02-02 15:59:32.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:33 compute-0 ceph-mon[75334]: pgmap v2064: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:34 compute-0 nova_compute[239545]: 2026-02-02 15:59:34.647 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:35 compute-0 nova_compute[239545]: 2026-02-02 15:59:35.151 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:35 compute-0 ceph-mon[75334]: pgmap v2065: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:37 compute-0 ceph-mon[75334]: pgmap v2066: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:39 compute-0 nova_compute[239545]: 2026-02-02 15:59:39.648 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:40 compute-0 ceph-mon[75334]: pgmap v2067: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:40 compute-0 nova_compute[239545]: 2026-02-02 15:59:40.207 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:40 compute-0 podman[280703]: 2026-02-02 15:59:40.329049224 +0000 UTC m=+0.063727704 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 02 15:59:40 compute-0 podman[280702]: 2026-02-02 15:59:40.362766886 +0000 UTC m=+0.096926044 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.vendor=CentOS)
Feb 02 15:59:41 compute-0 nova_compute[239545]: 2026-02-02 15:59:41.167 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 15:59:41 compute-0 nova_compute[239545]: 2026-02-02 15:59:41.168 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 02 15:59:41 compute-0 nova_compute[239545]: 2026-02-02 15:59:41.189 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 02 15:59:42 compute-0 ceph-mon[75334]: pgmap v2068: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_15:59:42
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 02 15:59:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 15:59:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:44 compute-0 ceph-mon[75334]: pgmap v2069: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:44 compute-0 nova_compute[239545]: 2026-02-02 15:59:44.650 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 15:59:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 15:59:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 15:59:45 compute-0 nova_compute[239545]: 2026-02-02 15:59:45.253 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:45 compute-0 sudo[280747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:59:45 compute-0 sudo[280747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:45 compute-0 sudo[280747]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:45 compute-0 sudo[280772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 02 15:59:45 compute-0 sudo[280772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:46 compute-0 ceph-mon[75334]: pgmap v2070: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:46 compute-0 sudo[280772]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:59:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:46 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:59:46 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:46 compute-0 sudo[280817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:59:46 compute-0 sudo[280817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:46 compute-0 sudo[280817]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:46 compute-0 sudo[280842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 15:59:46 compute-0 sudo[280842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:47 compute-0 sudo[280842]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:59:47 compute-0 sudo[280898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:59:47 compute-0 sudo[280898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:47 compute-0 sudo[280898]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:47 compute-0 sudo[280923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 15:59:47 compute-0 sudo[280923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:47 compute-0 ceph-mon[75334]: pgmap v2071: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 15:59:47 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.517398484 +0000 UTC m=+0.054858637 container create 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 02 15:59:47 compute-0 systemd[1]: Started libpod-conmon-09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc.scope.
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.494660356 +0000 UTC m=+0.032120489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.620223608 +0000 UTC m=+0.157683751 container init 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.628559633 +0000 UTC m=+0.166019756 container start 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.6325149 +0000 UTC m=+0.169975013 container attach 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:47 compute-0 trusting_austin[280978]: 167 167
Feb 02 15:59:47 compute-0 systemd[1]: libpod-09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc.scope: Deactivated successfully.
Feb 02 15:59:47 compute-0 conmon[280978]: conmon 09c0552554c32fee6d77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc.scope/container/memory.events
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.637747858 +0000 UTC m=+0.175207981 container died 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-574009da55242bd310a5bc0925d1797bfc36241511f631682a315b1a24b14c37-merged.mount: Deactivated successfully.
Feb 02 15:59:47 compute-0 podman[280961]: 2026-02-02 15:59:47.67775958 +0000 UTC m=+0.215219693 container remove 09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 02 15:59:47 compute-0 systemd[1]: libpod-conmon-09c0552554c32fee6d77a215e1d508a59b8785b3b92de29055ff3e40f22aacbc.scope: Deactivated successfully.
Feb 02 15:59:47 compute-0 podman[281001]: 2026-02-02 15:59:47.833857361 +0000 UTC m=+0.039131281 container create 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:59:47 compute-0 systemd[1]: Started libpod-conmon-3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce.scope.
Feb 02 15:59:47 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:47 compute-0 podman[281001]: 2026-02-02 15:59:47.815028379 +0000 UTC m=+0.020302329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:47 compute-0 podman[281001]: 2026-02-02 15:59:47.917420922 +0000 UTC m=+0.122694932 container init 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 15:59:47 compute-0 podman[281001]: 2026-02-02 15:59:47.927768546 +0000 UTC m=+0.133042476 container start 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:59:47 compute-0 podman[281001]: 2026-02-02 15:59:47.931427666 +0000 UTC m=+0.136701586 container attach 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 02 15:59:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:48 compute-0 interesting_jackson[281018]: --> passed data devices: 0 physical, 3 LVM
Feb 02 15:59:48 compute-0 interesting_jackson[281018]: --> All data devices are unavailable
Feb 02 15:59:48 compute-0 systemd[1]: libpod-3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce.scope: Deactivated successfully.
Feb 02 15:59:48 compute-0 podman[281001]: 2026-02-02 15:59:48.351586728 +0000 UTC m=+0.556860648 container died 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-43e65fa6e7f32c9696d343245313468019b4f746d5a683e6608093ea687bd0be-merged.mount: Deactivated successfully.
Feb 02 15:59:48 compute-0 podman[281001]: 2026-02-02 15:59:48.397077615 +0000 UTC m=+0.602351535 container remove 3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:59:48 compute-0 systemd[1]: libpod-conmon-3d6cab34dc04bf3a37821850ae076a58ec30c28f7329b639dffc7fde064a1bce.scope: Deactivated successfully.
Feb 02 15:59:48 compute-0 sudo[280923]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:48 compute-0 sudo[281052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:59:48 compute-0 sudo[281052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:48 compute-0 sudo[281052]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:48 compute-0 sudo[281077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 15:59:48 compute-0 sudo[281077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.799357819 +0000 UTC m=+0.053776881 container create faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:59:48 compute-0 systemd[1]: Started libpod-conmon-faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1.scope.
Feb 02 15:59:48 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.863870442 +0000 UTC m=+0.118289514 container init faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.773878603 +0000 UTC m=+0.028297725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.8694948 +0000 UTC m=+0.123913842 container start faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 15:59:48 compute-0 gracious_kowalevski[281130]: 167 167
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.872964926 +0000 UTC m=+0.127383958 container attach faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 15:59:48 compute-0 systemd[1]: libpod-faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1.scope: Deactivated successfully.
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.875016135 +0000 UTC m=+0.129435167 container died faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:59:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75617cc6350596553d47c0e484f35f3892c47e63089de9cbc7959d2b92b6ec4-merged.mount: Deactivated successfully.
Feb 02 15:59:48 compute-0 podman[281114]: 2026-02-02 15:59:48.930348563 +0000 UTC m=+0.184767585 container remove faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kowalevski, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:48 compute-0 systemd[1]: libpod-conmon-faf5ee0c9b752360d6552d7b1527f1edbf327506ecbdc5b75ab9b944f8b650f1.scope: Deactivated successfully.
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.067135801 +0000 UTC m=+0.039280835 container create a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 15:59:49 compute-0 systemd[1]: Started libpod-conmon-a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415.scope.
Feb 02 15:59:49 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044aa076f4d5a1dda4866f82b2c3697e38ffc1fb8217656233517d8e773c65fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044aa076f4d5a1dda4866f82b2c3697e38ffc1fb8217656233517d8e773c65fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044aa076f4d5a1dda4866f82b2c3697e38ffc1fb8217656233517d8e773c65fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/044aa076f4d5a1dda4866f82b2c3697e38ffc1fb8217656233517d8e773c65fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.047636492 +0000 UTC m=+0.019781516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.166310305 +0000 UTC m=+0.138455359 container init a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.177073609 +0000 UTC m=+0.149218613 container start a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.180526214 +0000 UTC m=+0.152671278 container attach a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 15:59:49 compute-0 optimistic_allen[281172]: {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     "0": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "devices": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "/dev/loop3"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             ],
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_name": "ceph_lv0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_size": "21470642176",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "name": "ceph_lv0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "tags": {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_name": "ceph",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.crush_device_class": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.encrypted": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.objectstore": "bluestore",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_id": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.vdo": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.with_tpm": "0"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             },
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "vg_name": "ceph_vg0"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         }
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     ],
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     "1": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "devices": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "/dev/loop4"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             ],
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_name": "ceph_lv1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_size": "21470642176",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "name": "ceph_lv1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "tags": {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_name": "ceph",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.crush_device_class": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.encrypted": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.objectstore": "bluestore",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_id": "1",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.vdo": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.with_tpm": "0"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             },
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "vg_name": "ceph_vg1"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         }
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     ],
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     "2": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "devices": [
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "/dev/loop5"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             ],
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_name": "ceph_lv2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_size": "21470642176",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "name": "ceph_lv2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "tags": {
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.cluster_name": "ceph",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.crush_device_class": "",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.encrypted": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.objectstore": "bluestore",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osd_id": "2",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.vdo": "0",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:                 "ceph.with_tpm": "0"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             },
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "type": "block",
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:             "vg_name": "ceph_vg2"
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:         }
Feb 02 15:59:49 compute-0 optimistic_allen[281172]:     ]
Feb 02 15:59:49 compute-0 optimistic_allen[281172]: }
Feb 02 15:59:49 compute-0 systemd[1]: libpod-a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415.scope: Deactivated successfully.
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.498833527 +0000 UTC m=+0.470978571 container died a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:49 compute-0 ceph-mon[75334]: pgmap v2072: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-044aa076f4d5a1dda4866f82b2c3697e38ffc1fb8217656233517d8e773c65fd-merged.mount: Deactivated successfully.
Feb 02 15:59:49 compute-0 podman[281156]: 2026-02-02 15:59:49.544330613 +0000 UTC m=+0.516475627 container remove a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_allen, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 15:59:49 compute-0 systemd[1]: libpod-conmon-a1500273b5d87794de6f19df70e435ffa3b09424d60ee35225c3f49748eb8415.scope: Deactivated successfully.
Feb 02 15:59:49 compute-0 sudo[281077]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:49 compute-0 sudo[281192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 15:59:49 compute-0 sudo[281192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:49 compute-0 sudo[281192]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:49 compute-0 nova_compute[239545]: 2026-02-02 15:59:49.651 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:49 compute-0 sudo[281217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 15:59:49 compute-0 sudo[281217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:49 compute-0 podman[281252]: 2026-02-02 15:59:49.956324635 +0000 UTC m=+0.040120255 container create ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 02 15:59:49 compute-0 systemd[1]: Started libpod-conmon-ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63.scope.
Feb 02 15:59:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:50.027159394 +0000 UTC m=+0.110955014 container init ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:50.033092979 +0000 UTC m=+0.116888609 container start ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:49.938159139 +0000 UTC m=+0.021954759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:50.036677238 +0000 UTC m=+0.120472888 container attach ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 02 15:59:50 compute-0 systemd[1]: libpod-ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63.scope: Deactivated successfully.
Feb 02 15:59:50 compute-0 eager_euler[281268]: 167 167
Feb 02 15:59:50 compute-0 conmon[281268]: conmon ed72ad1fac38035b6d41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63.scope/container/memory.events
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:50.040908111 +0000 UTC m=+0.124703741 container died ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 15:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9f7cf0906b77d82d87943be2b79e984ddac03261a8a0c15c0427a3d016b115-merged.mount: Deactivated successfully.
Feb 02 15:59:50 compute-0 podman[281252]: 2026-02-02 15:59:50.080522783 +0000 UTC m=+0.164318383 container remove ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 02 15:59:50 compute-0 systemd[1]: libpod-conmon-ed72ad1fac38035b6d41069e8b9c0bd06bed2c0eda67b2180e6958efcb04fa63.scope: Deactivated successfully.
Feb 02 15:59:50 compute-0 podman[281291]: 2026-02-02 15:59:50.224309123 +0000 UTC m=+0.052010658 container create 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 15:59:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:50 compute-0 nova_compute[239545]: 2026-02-02 15:59:50.256 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:50 compute-0 systemd[1]: Started libpod-conmon-5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999.scope.
Feb 02 15:59:50 compute-0 systemd[1]: Started libcrun container.
Feb 02 15:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f254f340b1a114b865b8744773f90464f2da31a396ff592a232a9bc4a732ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f254f340b1a114b865b8744773f90464f2da31a396ff592a232a9bc4a732ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f254f340b1a114b865b8744773f90464f2da31a396ff592a232a9bc4a732ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f254f340b1a114b865b8744773f90464f2da31a396ff592a232a9bc4a732ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 15:59:50 compute-0 podman[281291]: 2026-02-02 15:59:50.205901241 +0000 UTC m=+0.033602806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 15:59:50 compute-0 podman[281291]: 2026-02-02 15:59:50.310087178 +0000 UTC m=+0.137788713 container init 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 02 15:59:50 compute-0 podman[281291]: 2026-02-02 15:59:50.320821622 +0000 UTC m=+0.148523177 container start 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 02 15:59:50 compute-0 podman[281291]: 2026-02-02 15:59:50.32646789 +0000 UTC m=+0.154169405 container attach 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 15:59:50 compute-0 lvm[281389]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 15:59:50 compute-0 lvm[281389]: VG ceph_vg1 finished
Feb 02 15:59:50 compute-0 lvm[281388]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 15:59:50 compute-0 lvm[281388]: VG ceph_vg2 finished
Feb 02 15:59:50 compute-0 lvm[281385]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 15:59:50 compute-0 lvm[281385]: VG ceph_vg0 finished
Feb 02 15:59:51 compute-0 vibrant_bell[281307]: {}
Feb 02 15:59:51 compute-0 systemd[1]: libpod-5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999.scope: Deactivated successfully.
Feb 02 15:59:51 compute-0 systemd[1]: libpod-5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999.scope: Consumed 1.174s CPU time.
Feb 02 15:59:51 compute-0 podman[281291]: 2026-02-02 15:59:51.10834207 +0000 UTC m=+0.936043625 container died 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 02 15:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f254f340b1a114b865b8744773f90464f2da31a396ff592a232a9bc4a732ea-merged.mount: Deactivated successfully.
Feb 02 15:59:51 compute-0 podman[281291]: 2026-02-02 15:59:51.157858565 +0000 UTC m=+0.985560120 container remove 5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 15:59:51 compute-0 systemd[1]: libpod-conmon-5033ef4e83cf043cb0df0fa94ba5e041913576c5b1a4264aa6f3eafdb4323999.scope: Deactivated successfully.
Feb 02 15:59:51 compute-0 sudo[281217]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 15:59:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 15:59:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:51 compute-0 sudo[281403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 15:59:51 compute-0 sudo[281403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 15:59:51 compute-0 sudo[281403]: pam_unix(sudo:session): session closed for user root
Feb 02 15:59:51 compute-0 ceph-mon[75334]: pgmap v2073: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:51 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 15:59:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:53 compute-0 ceph-mon[75334]: pgmap v2074: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:54 compute-0 nova_compute[239545]: 2026-02-02 15:59:54.696 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007632409966596675 of space, bias 1.0, pg target 0.22897229899790025 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029290397162393036 of space, bias 1.0, pg target 0.8787119148717911 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8636478275776745e-06 of space, bias 1.0, pg target 0.0011590943482733024 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677614895015238 of space, bias 1.0, pg target 0.20032844685045714 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4237735919936245e-06 of space, bias 4.0, pg target 0.0017085283103923494 quantized to 16 (current 16)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 15:59:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 15:59:55 compute-0 nova_compute[239545]: 2026-02-02 15:59:55.259 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 15:59:55 compute-0 ceph-mon[75334]: pgmap v2075: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:57 compute-0 ceph-mon[75334]: pgmap v2076: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 15:59:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:59:59.270 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 15:59:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:59:59.271 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 15:59:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 15:59:59.272 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 15:59:59 compute-0 ceph-mon[75334]: pgmap v2077: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 15:59:59 compute-0 nova_compute[239545]: 2026-02-02 15:59:59.698 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:00 compute-0 nova_compute[239545]: 2026-02-02 16:00:00.261 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:01 compute-0 ceph-mon[75334]: pgmap v2078: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:03 compute-0 ceph-mon[75334]: pgmap v2079: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:04 compute-0 nova_compute[239545]: 2026-02-02 16:00:04.699 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:05 compute-0 nova_compute[239545]: 2026-02-02 16:00:05.298 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:05 compute-0 ceph-mon[75334]: pgmap v2080: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:07 compute-0 ceph-mon[75334]: pgmap v2081: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:09 compute-0 ceph-mon[75334]: pgmap v2082: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:09 compute-0 nova_compute[239545]: 2026-02-02 16:00:09.701 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:10 compute-0 nova_compute[239545]: 2026-02-02 16:00:10.301 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:11 compute-0 podman[281429]: 2026-02-02 16:00:11.330521843 +0000 UTC m=+0.061570672 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Feb 02 16:00:11 compute-0 podman[281428]: 2026-02-02 16:00:11.370474394 +0000 UTC m=+0.101041972 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 02 16:00:11 compute-0 ceph-mon[75334]: pgmap v2083: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:13 compute-0 ceph-mon[75334]: pgmap v2084: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:14 compute-0 nova_compute[239545]: 2026-02-02 16:00:14.703 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:15 compute-0 nova_compute[239545]: 2026-02-02 16:00:15.302 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:15 compute-0 nova_compute[239545]: 2026-02-02 16:00:15.568 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:15 compute-0 ceph-mon[75334]: pgmap v2085: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:17 compute-0 ceph-mon[75334]: pgmap v2086: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.727 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.727 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquired lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.727 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 02 16:00:18 compute-0 nova_compute[239545]: 2026-02-02 16:00:18.727 239549 DEBUG nova.objects.instance [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 16:00:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:19 compute-0 ceph-mon[75334]: pgmap v2087: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:19 compute-0 nova_compute[239545]: 2026-02-02 16:00:19.704 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:20 compute-0 nova_compute[239545]: 2026-02-02 16:00:20.303 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:20 compute-0 nova_compute[239545]: 2026-02-02 16:00:20.611 239549 DEBUG nova.network.neutron [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [{"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 16:00:20 compute-0 nova_compute[239545]: 2026-02-02 16:00:20.627 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Releasing lock "refresh_cache-0a8d1e5a-af31-43cc-80a2-17c586996828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 02 16:00:20 compute-0 nova_compute[239545]: 2026-02-02 16:00:20.627 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 02 16:00:21 compute-0 ceph-mon[75334]: pgmap v2088: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:23 compute-0 nova_compute[239545]: 2026-02-02 16:00:23.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:23 compute-0 nova_compute[239545]: 2026-02-02 16:00:23.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:23 compute-0 ceph-mon[75334]: pgmap v2089: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:24 compute-0 nova_compute[239545]: 2026-02-02 16:00:24.706 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:25 compute-0 nova_compute[239545]: 2026-02-02 16:00:25.327 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:25 compute-0 ceph-mon[75334]: pgmap v2090: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:27 compute-0 nova_compute[239545]: 2026-02-02 16:00:27.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:27 compute-0 ceph-mon[75334]: pgmap v2091: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 16:00:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467221014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:00:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 16:00:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467221014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:00:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.578 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.579 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.579 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.580 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 16:00:28 compute-0 nova_compute[239545]: 2026-02-02 16:00:28.580 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:00:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/467221014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:00:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/467221014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:00:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:00:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854510616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.161 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.247 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.248 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.248 239549 DEBUG nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.448 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.449 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3988MB free_disk=59.94249573443085GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.450 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.450 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.553 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Instance 0a8d1e5a-af31-43cc-80a2-17c586996828 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.553 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.553 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.582 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:00:29 compute-0 ceph-mon[75334]: pgmap v2092: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2854510616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:29 compute-0 nova_compute[239545]: 2026-02-02 16:00:29.707 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:00:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867612731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.083 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.088 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.104 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.107 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.108 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:30 compute-0 nova_compute[239545]: 2026-02-02 16:00:30.329 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3867612731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:31 compute-0 nova_compute[239545]: 2026-02-02 16:00:31.107 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:00:31 compute-0 nova_compute[239545]: 2026-02-02 16:00:31.108 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 16:00:31 compute-0 ceph-mon[75334]: pgmap v2093: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:33 compute-0 ceph-mon[75334]: pgmap v2094: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:34 compute-0 nova_compute[239545]: 2026-02-02 16:00:34.709 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:35 compute-0 nova_compute[239545]: 2026-02-02 16:00:35.331 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:35 compute-0 ceph-mon[75334]: pgmap v2095: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:37 compute-0 ceph-mon[75334]: pgmap v2096: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:39 compute-0 nova_compute[239545]: 2026-02-02 16:00:39.711 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:39 compute-0 ceph-mon[75334]: pgmap v2097: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.319 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.320 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.321 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.321 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.321 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.324 239549 INFO nova.compute.manager [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Terminating instance
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.326 239549 DEBUG nova.compute.manager [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.333 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 kernel: tapb40b5abb-11 (unregistering): left promiscuous mode
Feb 02 16:00:40 compute-0 NetworkManager[49171]: <info>  [1770048040.4109] device (tapb40b5abb-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 02 16:00:40 compute-0 ovn_controller[144995]: 2026-02-02T16:00:40Z|00290|binding|INFO|Releasing lport b40b5abb-11a7-4bce-96a9-904feea605f6 from this chassis (sb_readonly=0)
Feb 02 16:00:40 compute-0 ovn_controller[144995]: 2026-02-02T16:00:40Z|00291|binding|INFO|Setting lport b40b5abb-11a7-4bce-96a9-904feea605f6 down in Southbound
Feb 02 16:00:40 compute-0 ovn_controller[144995]: 2026-02-02T16:00:40Z|00292|binding|INFO|Removing iface tapb40b5abb-11 ovn-installed in OVS
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.418 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.421 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.426 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:7b:e6 10.100.0.6'], port_security=['fa:16:3e:a3:7b:e6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0a8d1e5a-af31-43cc-80a2-17c586996828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dcd12fb00104dd9bbcc100f7828c435', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64abc105-a857-4a13-b475-b019801cc32c e8b16762-2aff-4721-b0e3-10bc40176f4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8bd12e5-65b1-4fe7-9b52-fe844064c5a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>], logical_port=b40b5abb-11a7-4bce-96a9-904feea605f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efc0ab1fb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.427 154982 INFO neutron.agent.ovn.metadata.agent [-] Port b40b5abb-11a7-4bce-96a9-904feea605f6 in datapath 93cb165b-b97d-434d-8af7-ddc2fabeffee unbound from our chassis
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.428 154982 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93cb165b-b97d-434d-8af7-ddc2fabeffee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.430 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.429 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7163731-7b32-4800-9324-5a2debd3df36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.431 154982 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee namespace which is not needed anymore
Feb 02 16:00:40 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb 02 16:00:40 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 50.500s CPU time.
Feb 02 16:00:40 compute-0 systemd-machined[207609]: Machine qemu-22-instance-00000016 terminated.
Feb 02 16:00:40 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [NOTICE]   (267563) : haproxy version is 2.8.14-c23fe91
Feb 02 16:00:40 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [NOTICE]   (267563) : path to executable is /usr/sbin/haproxy
Feb 02 16:00:40 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [WARNING]  (267563) : Exiting Master process...
Feb 02 16:00:40 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [ALERT]    (267563) : Current worker (267565) exited with code 143 (Terminated)
Feb 02 16:00:40 compute-0 neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee[267558]: [WARNING]  (267563) : All workers exited. Exiting... (0)
Feb 02 16:00:40 compute-0 systemd[1]: libpod-5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac.scope: Deactivated successfully.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.547 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.551 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 podman[281540]: 2026-02-02 16:00:40.551327223 +0000 UTC m=+0.044142715 container died 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.567 239549 INFO nova.virt.libvirt.driver [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Instance destroyed successfully.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.568 239549 DEBUG nova.objects.instance [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lazy-loading 'resources' on Instance uuid 0a8d1e5a-af31-43cc-80a2-17c586996828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.584 239549 DEBUG nova.virt.libvirt.vif [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T15:44:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-569887417',display_name='tempest-SnapshotDataIntegrityTests-server-569887417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-569887417',id=22,image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDjFL6frGYjZPZCAUAsF1kAi2gOs4UqX81GaslFuyFLyY5rcP/AssRZOt9xbxtSCQ4ETXtR5POrUSSA1jnMxdJ/13sE4Jmx1NpbWyjIm1JVJWcS6wHWb75Gr3WAoTE0CQ==',key_name='tempest-keypair-1823352159',keypairs=<?>,launch_index=0,launched_at=2026-02-02T15:44:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4dcd12fb00104dd9bbcc100f7828c435',ramdisk_id='',reservation_id='r-1qpnu0l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='271bf15b-9e9a-428a-a098-dcc68b158a7a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-235440494',owner_user_name='tempest-SnapshotDataIntegrityTests-235440494-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T15:44:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='91001e0c903c4810bbeb98636b2e2380',uuid=0a8d1e5a-af31-43cc-80a2-17c586996828,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.585 239549 DEBUG nova.network.os_vif_util [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converting VIF {"id": "b40b5abb-11a7-4bce-96a9-904feea605f6", "address": "fa:16:3e:a3:7b:e6", "network": {"id": "93cb165b-b97d-434d-8af7-ddc2fabeffee", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-437424832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dcd12fb00104dd9bbcc100f7828c435", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb40b5abb-11", "ovs_interfaceid": "b40b5abb-11a7-4bce-96a9-904feea605f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.586 239549 DEBUG nova.network.os_vif_util [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.587 239549 DEBUG os_vif [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 02 16:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac-userdata-shm.mount: Deactivated successfully.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.588 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.588 239549 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb40b5abb-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.590 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-04afa74679d4774c0a9e465d1c766f43997907cb707a89950696228a16ca8f4b-merged.mount: Deactivated successfully.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.591 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.595 239549 INFO os_vif [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:7b:e6,bridge_name='br-int',has_traffic_filtering=True,id=b40b5abb-11a7-4bce-96a9-904feea605f6,network=Network(93cb165b-b97d-434d-8af7-ddc2fabeffee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb40b5abb-11')
Feb 02 16:00:40 compute-0 podman[281540]: 2026-02-02 16:00:40.59928308 +0000 UTC m=+0.092098482 container cleanup 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:00:40 compute-0 systemd[1]: libpod-conmon-5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac.scope: Deactivated successfully.
Feb 02 16:00:40 compute-0 podman[281582]: 2026-02-02 16:00:40.676498955 +0000 UTC m=+0.053915954 container remove 5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.682 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5eb9842-ad81-4e22-805e-f8f0bb9154d0]: (4, ('Mon Feb  2 04:00:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee (5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac)\n5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac\nMon Feb  2 04:00:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee (5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac)\n5f03a92956ce8e97aacf0490eee4da0af548dd09de2cd6eeb9cae462cca054ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.685 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[002fe717-36f4-4e86-b2fa-96dcfec792c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.686 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93cb165b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 16:00:40 compute-0 kernel: tap93cb165b-b0: left promiscuous mode
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.688 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.693 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.697 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[01367adf-8530-4821-8832-c49b414efbec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.715 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2f6d56-a61b-4aea-98b1-d9394d2fde9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.717 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[52d0d595-83c5-48a7-99ff-443dd41d0924]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.728 245965 DEBUG oslo.privsep.daemon [-] privsep: reply[7d29655c-6199-4f63-a3d2-8b99565150ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458297, 'reachable_time': 37521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281608, 'error': None, 'target': 'ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.730 155499 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93cb165b-b97d-434d-8af7-ddc2fabeffee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 02 16:00:40 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:40.731 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[33b42779-b7d8-443f-bcdb-0b25b4e1278c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 02 16:00:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d93cb165b\x2db97d\x2d434d\x2d8af7\x2dddc2fabeffee.mount: Deactivated successfully.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.906 239549 INFO nova.virt.libvirt.driver [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Deleting instance files /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828_del
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.907 239549 INFO nova.virt.libvirt.driver [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Deletion of /var/lib/nova/instances/0a8d1e5a-af31-43cc-80a2-17c586996828_del complete
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.959 239549 INFO nova.compute.manager [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.959 239549 DEBUG oslo.service.loopingcall [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.960 239549 DEBUG nova.compute.manager [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 02 16:00:40 compute-0 nova_compute[239545]: 2026-02-02 16:00:40.960 239549 DEBUG nova.network.neutron [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.083 239549 DEBUG nova.compute.manager [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-unplugged-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.085 239549 DEBUG oslo_concurrency.lockutils [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.085 239549 DEBUG oslo_concurrency.lockutils [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.086 239549 DEBUG oslo_concurrency.lockutils [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.086 239549 DEBUG nova.compute.manager [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] No waiting events found dispatching network-vif-unplugged-b40b5abb-11a7-4bce-96a9-904feea605f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.086 239549 DEBUG nova.compute.manager [req-bfefd3af-4044-4d22-aba1-364f3671dfc8 req-57c6a8e0-8d26-4a60-9681-3e130769b166 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-unplugged-b40b5abb-11a7-4bce-96a9-904feea605f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 02 16:00:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:41.143 154982 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:50:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '86:a1:c3:ab:dd:81'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.144 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:41 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:41.145 154982 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 02 16:00:41 compute-0 ceph-mon[75334]: pgmap v2098: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.781 239549 DEBUG nova.network.neutron [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.797 239549 INFO nova.compute.manager [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Took 0.84 seconds to deallocate network for instance.
Feb 02 16:00:41 compute-0 nova_compute[239545]: 2026-02-02 16:00:41.943 239549 DEBUG nova.compute.manager [req-33120ade-bdca-49e2-aaba-9f70b1cdfd6f req-fa5fc2bd-dfcc-487e-9c71-d8f143eeac85 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-deleted-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.027 239549 INFO nova.compute.manager [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Took 0.23 seconds to detach 1 volumes for instance.
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.089 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.090 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.147 239549 DEBUG oslo_concurrency.processutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:42 compute-0 podman[281622]: 2026-02-02 16:00:42.361520863 +0000 UTC m=+0.082585898 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 02 16:00:42 compute-0 podman[281611]: 2026-02-02 16:00:42.368346081 +0000 UTC m=+0.089390785 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 16:00:42 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:00:42 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2465264748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.678 239549 DEBUG oslo_concurrency.processutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.687 239549 DEBUG nova.compute.provider_tree [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.707 239549 DEBUG nova.scheduler.client.report [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.733 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:42 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2465264748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.760 239549 INFO nova.scheduler.client.report [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Deleted allocations for instance 0a8d1e5a-af31-43cc-80a2-17c586996828
Feb 02 16:00:42 compute-0 nova_compute[239545]: 2026-02-02 16:00:42.837 239549 DEBUG oslo_concurrency.lockutils [None req-70ebfc8e-1e0e-4d02-ae89-6a3054a7b63f 91001e0c903c4810bbeb98636b2e2380 4dcd12fb00104dd9bbcc100f7828c435 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_16:00:42
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr']
Feb 02 16:00:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.146 239549 DEBUG nova.compute.manager [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.147 239549 DEBUG oslo_concurrency.lockutils [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Acquiring lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.147 239549 DEBUG oslo_concurrency.lockutils [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.148 239549 DEBUG oslo_concurrency.lockutils [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] Lock "0a8d1e5a-af31-43cc-80a2-17c586996828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.148 239549 DEBUG nova.compute.manager [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] No waiting events found dispatching network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 02 16:00:43 compute-0 nova_compute[239545]: 2026-02-02 16:00:43.149 239549 WARNING nova.compute.manager [req-944f950d-850b-4d8d-a4c3-457fcb774029 req-45d5f897-529e-45b6-a7e2-0d32f6b1fed5 d4c9fb41732744c28cf62023364d23d3 625e86c884f1485cb78ec4d053300312 - - default default] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Received unexpected event network-vif-plugged-b40b5abb-11a7-4bce-96a9-904feea605f6 for instance with vm_state deleted and task_state None.
Feb 02 16:00:43 compute-0 ceph-mon[75334]: pgmap v2099: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 335 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:00:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 16:00:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 16:00:45 compute-0 nova_compute[239545]: 2026-02-02 16:00:45.335 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:45 compute-0 nova_compute[239545]: 2026-02-02 16:00:45.590 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:45 compute-0 ceph-mon[75334]: pgmap v2100: 305 pgs: 305 active+clean; 335 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Feb 02 16:00:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:47 compute-0 ceph-mon[75334]: pgmap v2101: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:48 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:49 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:49.147 154982 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=673607ba-6470-4d88-9324-0f750aed69af, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 02 16:00:49 compute-0 nova_compute[239545]: 2026-02-02 16:00:49.417 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:49 compute-0 nova_compute[239545]: 2026-02-02 16:00:49.467 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:49 compute-0 ceph-mon[75334]: pgmap v2102: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:50 compute-0 nova_compute[239545]: 2026-02-02 16:00:50.337 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:50 compute-0 nova_compute[239545]: 2026-02-02 16:00:50.593 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:51 compute-0 sudo[281676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:00:51 compute-0 sudo[281676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:51 compute-0 sudo[281676]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:51 compute-0 sudo[281701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 16:00:51 compute-0 sudo[281701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:51 compute-0 ceph-mon[75334]: pgmap v2103: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:51 compute-0 sudo[281701]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 02 16:00:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 16:00:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 16:00:51 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:00:51 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 16:00:51 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 16:00:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 16:00:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 16:00:52 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 16:00:52 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:00:52 compute-0 sudo[281757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:00:52 compute-0 sudo[281757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:52 compute-0 sudo[281757]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:52 compute-0 sudo[281782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 16:00:52 compute-0 sudo[281782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.408337152 +0000 UTC m=+0.054053117 container create 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 16:00:52 compute-0 systemd[1]: Started libpod-conmon-99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a.scope.
Feb 02 16:00:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.387265345 +0000 UTC m=+0.032981340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.498062234 +0000 UTC m=+0.143778269 container init 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.508335897 +0000 UTC m=+0.154051852 container start 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.512281754 +0000 UTC m=+0.157997749 container attach 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 02 16:00:52 compute-0 sad_northcutt[281837]: 167 167
Feb 02 16:00:52 compute-0 systemd[1]: libpod-99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a.scope: Deactivated successfully.
Feb 02 16:00:52 compute-0 conmon[281837]: conmon 99962564a3176cc08cb9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a.scope/container/memory.events
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.517751477 +0000 UTC m=+0.163467432 container died 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 16:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-247eebc4fbabf259d5296924cc1b752e66a2d20e39333433a8f85450170382aa-merged.mount: Deactivated successfully.
Feb 02 16:00:52 compute-0 podman[281820]: 2026-02-02 16:00:52.565095839 +0000 UTC m=+0.210811794 container remove 99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 02 16:00:52 compute-0 systemd[1]: libpod-conmon-99962564a3176cc08cb9a8cbb0eeda04135a8f773602dce421d278bd4134a98a.scope: Deactivated successfully.
Feb 02 16:00:52 compute-0 podman[281861]: 2026-02-02 16:00:52.724669026 +0000 UTC m=+0.058240190 container create ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 02 16:00:52 compute-0 systemd[1]: Started libpod-conmon-ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80.scope.
Feb 02 16:00:52 compute-0 podman[281861]: 2026-02-02 16:00:52.703887456 +0000 UTC m=+0.037458650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:52 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:52 compute-0 podman[281861]: 2026-02-02 16:00:52.840153551 +0000 UTC m=+0.173724785 container init ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 02 16:00:52 compute-0 podman[281861]: 2026-02-02 16:00:52.852846052 +0000 UTC m=+0.186417246 container start ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 02 16:00:52 compute-0 podman[281861]: 2026-02-02 16:00:52.857549967 +0000 UTC m=+0.191121201 container attach ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 16:00:52 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:00:53 compute-0 intelligent_newton[281878]: --> passed data devices: 0 physical, 3 LVM
Feb 02 16:00:53 compute-0 intelligent_newton[281878]: --> All data devices are unavailable
Feb 02 16:00:53 compute-0 systemd[1]: libpod-ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80.scope: Deactivated successfully.
Feb 02 16:00:53 compute-0 podman[281861]: 2026-02-02 16:00:53.397223214 +0000 UTC m=+0.730794378 container died ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 02 16:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aea0a83c87b47c9e103aed05b12f2a9f12847d6ed796fad12c7dc36aebd1bb2-merged.mount: Deactivated successfully.
Feb 02 16:00:53 compute-0 podman[281861]: 2026-02-02 16:00:53.445343774 +0000 UTC m=+0.778914968 container remove ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 02 16:00:53 compute-0 systemd[1]: libpod-conmon-ce5fe8aed023e2e5cd1f28ebc971dd00fbfe343605f9fbfff0c77d107a764d80.scope: Deactivated successfully.
Feb 02 16:00:53 compute-0 sudo[281782]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:53 compute-0 sudo[281911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:00:53 compute-0 sudo[281911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:53 compute-0 sudo[281911]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:53 compute-0 sudo[281936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 16:00:53 compute-0 sudo[281936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:53 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:53 compute-0 podman[281973]: 2026-02-02 16:00:53.918721243 +0000 UTC m=+0.041340755 container create 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 02 16:00:53 compute-0 systemd[1]: Started libpod-conmon-3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d.scope.
Feb 02 16:00:53 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:53 compute-0 podman[281973]: 2026-02-02 16:00:53.901003888 +0000 UTC m=+0.023623430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:53 compute-0 podman[281973]: 2026-02-02 16:00:53.998684825 +0000 UTC m=+0.121304387 container init 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 02 16:00:54 compute-0 podman[281973]: 2026-02-02 16:00:54.005175535 +0000 UTC m=+0.127795047 container start 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 16:00:54 compute-0 podman[281973]: 2026-02-02 16:00:54.009592183 +0000 UTC m=+0.132211695 container attach 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 16:00:54 compute-0 jovial_volhard[281989]: 167 167
Feb 02 16:00:54 compute-0 systemd[1]: libpod-3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d.scope: Deactivated successfully.
Feb 02 16:00:54 compute-0 podman[281973]: 2026-02-02 16:00:54.012027983 +0000 UTC m=+0.134647505 container died 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:00:54 compute-0 ceph-mon[75334]: pgmap v2104: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-87362d15af41d7a3b654e5ec8e92fd7f842d418b29cb54d4c9e914ab9cf9838d-merged.mount: Deactivated successfully.
Feb 02 16:00:54 compute-0 podman[281973]: 2026-02-02 16:00:54.050550768 +0000 UTC m=+0.173170290 container remove 3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 16:00:54 compute-0 systemd[1]: libpod-conmon-3f5ff4e6589f4137b822a93d30a909d726daf09c39810acff8b3e9dc75f1f18d.scope: Deactivated successfully.
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.20705076 +0000 UTC m=+0.049031445 container create d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 02 16:00:54 compute-0 systemd[1]: Started libpod-conmon-d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e.scope.
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.179983846 +0000 UTC m=+0.021964601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:54 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591e18a2bf9c60a4e917c02fcc79f0b0fee5e9d1963da10a7cb2fda99dbacee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591e18a2bf9c60a4e917c02fcc79f0b0fee5e9d1963da10a7cb2fda99dbacee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591e18a2bf9c60a4e917c02fcc79f0b0fee5e9d1963da10a7cb2fda99dbacee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591e18a2bf9c60a4e917c02fcc79f0b0fee5e9d1963da10a7cb2fda99dbacee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.301124318 +0000 UTC m=+0.143104993 container init d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.305963758 +0000 UTC m=+0.147944423 container start d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.309212057 +0000 UTC m=+0.151192732 container attach d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:54 compute-0 busy_golick[282029]: {
Feb 02 16:00:54 compute-0 busy_golick[282029]:     "0": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:         {
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "devices": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "/dev/loop3"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             ],
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_name": "ceph_lv0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_size": "21470642176",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "name": "ceph_lv0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "tags": {
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_name": "ceph",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.crush_device_class": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.encrypted": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.objectstore": "bluestore",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_id": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.vdo": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.with_tpm": "0"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             },
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "vg_name": "ceph_vg0"
Feb 02 16:00:54 compute-0 busy_golick[282029]:         }
Feb 02 16:00:54 compute-0 busy_golick[282029]:     ],
Feb 02 16:00:54 compute-0 busy_golick[282029]:     "1": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:         {
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "devices": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "/dev/loop4"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             ],
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_name": "ceph_lv1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_size": "21470642176",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "name": "ceph_lv1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "tags": {
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_name": "ceph",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.crush_device_class": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.encrypted": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.objectstore": "bluestore",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_id": "1",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.vdo": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.with_tpm": "0"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             },
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "vg_name": "ceph_vg1"
Feb 02 16:00:54 compute-0 busy_golick[282029]:         }
Feb 02 16:00:54 compute-0 busy_golick[282029]:     ],
Feb 02 16:00:54 compute-0 busy_golick[282029]:     "2": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:         {
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "devices": [
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "/dev/loop5"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             ],
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_name": "ceph_lv2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_size": "21470642176",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "name": "ceph_lv2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "tags": {
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.cluster_name": "ceph",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.crush_device_class": "",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.encrypted": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.objectstore": "bluestore",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osd_id": "2",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.vdo": "0",
Feb 02 16:00:54 compute-0 busy_golick[282029]:                 "ceph.with_tpm": "0"
Feb 02 16:00:54 compute-0 busy_golick[282029]:             },
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "type": "block",
Feb 02 16:00:54 compute-0 busy_golick[282029]:             "vg_name": "ceph_vg2"
Feb 02 16:00:54 compute-0 busy_golick[282029]:         }
Feb 02 16:00:54 compute-0 busy_golick[282029]:     ]
Feb 02 16:00:54 compute-0 busy_golick[282029]: }
Feb 02 16:00:54 compute-0 systemd[1]: libpod-d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e.scope: Deactivated successfully.
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.623151452 +0000 UTC m=+0.465132097 container died d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 02 16:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6591e18a2bf9c60a4e917c02fcc79f0b0fee5e9d1963da10a7cb2fda99dbacee-merged.mount: Deactivated successfully.
Feb 02 16:00:54 compute-0 podman[282013]: 2026-02-02 16:00:54.662725393 +0000 UTC m=+0.504706058 container remove d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_golick, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 16:00:54 compute-0 systemd[1]: libpod-conmon-d0ac440d9978bbb02615b266a46e96bcbf3ae1381dcd33dab400c8dace72073e.scope: Deactivated successfully.
Feb 02 16:00:54 compute-0 sudo[281936]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6766615411363839e-06 of space, bias 1.0, pg target 0.0005029984623409152 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002929071092416185 of space, bias 1.0, pg target 0.8787213277248556 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.860744638524344e-06 of space, bias 1.0, pg target 0.0011582233915573034 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677604493214888 of space, bias 1.0, pg target 0.20032813479644662 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4227334119584742e-06 of space, bias 4.0, pg target 0.001707280094350169 quantized to 16 (current 16)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:00:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 16:00:54 compute-0 sudo[282050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:00:54 compute-0 sudo[282050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:54 compute-0 sudo[282050]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:54 compute-0 sudo[282075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 16:00:54 compute-0 sudo[282075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.198066423 +0000 UTC m=+0.047485846 container create a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 02 16:00:55 compute-0 systemd[1]: Started libpod-conmon-a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af.scope.
Feb 02 16:00:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.180085382 +0000 UTC m=+0.029504785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.277520914 +0000 UTC m=+0.126940307 container init a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.286859522 +0000 UTC m=+0.136278915 container start a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 02 16:00:55 compute-0 vigorous_robinson[282129]: 167 167
Feb 02 16:00:55 compute-0 systemd[1]: libpod-a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af.scope: Deactivated successfully.
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.292163293 +0000 UTC m=+0.141582726 container attach a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.292759397 +0000 UTC m=+0.142178790 container died a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b59ee756e3820f8969091ff9b4d3d448adc6306eccca0388e48f2ee7d5a78b9-merged.mount: Deactivated successfully.
Feb 02 16:00:55 compute-0 podman[282112]: 2026-02-02 16:00:55.329912569 +0000 UTC m=+0.179331972 container remove a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 16:00:55 compute-0 systemd[1]: libpod-conmon-a96af87e34a1f7e8954b54bd86e66d00891fc3e7b2226e53b2a5c8f33e9720af.scope: Deactivated successfully.
Feb 02 16:00:55 compute-0 nova_compute[239545]: 2026-02-02 16:00:55.341 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:55 compute-0 podman[282152]: 2026-02-02 16:00:55.500139038 +0000 UTC m=+0.059519273 container create d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 16:00:55 compute-0 systemd[1]: Started libpod-conmon-d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2.scope.
Feb 02 16:00:55 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4558848c4701f5be928dcafb4e17dad7c91af84e8537d0e79dcf793fb6d4f6da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4558848c4701f5be928dcafb4e17dad7c91af84e8537d0e79dcf793fb6d4f6da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4558848c4701f5be928dcafb4e17dad7c91af84e8537d0e79dcf793fb6d4f6da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4558848c4701f5be928dcafb4e17dad7c91af84e8537d0e79dcf793fb6d4f6da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:00:55 compute-0 nova_compute[239545]: 2026-02-02 16:00:55.565 239549 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770048040.5641415, 0a8d1e5a-af31-43cc-80a2-17c586996828 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 02 16:00:55 compute-0 nova_compute[239545]: 2026-02-02 16:00:55.567 239549 INFO nova.compute.manager [-] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] VM Stopped (Lifecycle Event)
Feb 02 16:00:55 compute-0 podman[282152]: 2026-02-02 16:00:55.47579977 +0000 UTC m=+0.035180095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:00:55 compute-0 podman[282152]: 2026-02-02 16:00:55.589745437 +0000 UTC m=+0.149125722 container init d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 02 16:00:55 compute-0 nova_compute[239545]: 2026-02-02 16:00:55.593 239549 DEBUG nova.compute.manager [None req-a1c58fc4-efb6-4f59-8a89-4763e28834ad - - - - - -] [instance: 0a8d1e5a-af31-43cc-80a2-17c586996828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 02 16:00:55 compute-0 nova_compute[239545]: 2026-02-02 16:00:55.596 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:00:55 compute-0 podman[282152]: 2026-02-02 16:00:55.596502932 +0000 UTC m=+0.155883167 container start d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:55 compute-0 podman[282152]: 2026-02-02 16:00:55.599979388 +0000 UTC m=+0.159359663 container attach d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:56 compute-0 ceph-mon[75334]: pgmap v2105: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 02 16:00:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Feb 02 16:00:56 compute-0 lvm[282246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 16:00:56 compute-0 lvm[282247]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 16:00:56 compute-0 lvm[282247]: VG ceph_vg1 finished
Feb 02 16:00:56 compute-0 lvm[282246]: VG ceph_vg0 finished
Feb 02 16:00:56 compute-0 lvm[282249]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 16:00:56 compute-0 lvm[282249]: VG ceph_vg2 finished
Feb 02 16:00:56 compute-0 beautiful_mahavira[282168]: {}
Feb 02 16:00:56 compute-0 systemd[1]: libpod-d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2.scope: Deactivated successfully.
Feb 02 16:00:56 compute-0 systemd[1]: libpod-d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2.scope: Consumed 1.256s CPU time.
Feb 02 16:00:56 compute-0 podman[282252]: 2026-02-02 16:00:56.551525952 +0000 UTC m=+0.023509337 container died d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4558848c4701f5be928dcafb4e17dad7c91af84e8537d0e79dcf793fb6d4f6da-merged.mount: Deactivated successfully.
Feb 02 16:00:56 compute-0 podman[282252]: 2026-02-02 16:00:56.590212502 +0000 UTC m=+0.062195847 container remove d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 02 16:00:56 compute-0 systemd[1]: libpod-conmon-d8e0f32d742a3789c30543af700a7f60d5d47284174cc1507579f54cc333a9b2.scope: Deactivated successfully.
Feb 02 16:00:56 compute-0 sudo[282075]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 16:00:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:56 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 16:00:56 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:56 compute-0 sudo[282267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 16:00:56 compute-0 sudo[282267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:00:56 compute-0 sudo[282267]: pam_unix(sudo:session): session closed for user root
Feb 02 16:00:57 compute-0 ceph-mon[75334]: pgmap v2106: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Feb 02 16:00:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:57 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:00:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:00:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:00:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:59.272 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:00:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:59.272 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:00:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:00:59.272 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:00:59 compute-0 ceph-mon[75334]: pgmap v2107: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:00 compute-0 nova_compute[239545]: 2026-02-02 16:01:00.344 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:00 compute-0 nova_compute[239545]: 2026-02-02 16:01:00.599 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:01 compute-0 CROND[282293]: (root) CMD (run-parts /etc/cron.hourly)
Feb 02 16:01:01 compute-0 ceph-mon[75334]: pgmap v2108: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:01 compute-0 run-parts[282296]: (/etc/cron.hourly) starting 0anacron
Feb 02 16:01:01 compute-0 run-parts[282302]: (/etc/cron.hourly) finished 0anacron
Feb 02 16:01:01 compute-0 CROND[282292]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 02 16:01:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:03 compute-0 ceph-mon[75334]: pgmap v2109: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:05 compute-0 nova_compute[239545]: 2026-02-02 16:01:05.346 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:05 compute-0 nova_compute[239545]: 2026-02-02 16:01:05.601 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:05 compute-0 ceph-mon[75334]: pgmap v2110: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:07 compute-0 ceph-mon[75334]: pgmap v2111: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:08 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:09 compute-0 ceph-mon[75334]: pgmap v2112: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:10 compute-0 nova_compute[239545]: 2026-02-02 16:01:10.348 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:10 compute-0 nova_compute[239545]: 2026-02-02 16:01:10.602 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:11 compute-0 ceph-mon[75334]: pgmap v2113: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:13 compute-0 podman[282304]: 2026-02-02 16:01:13.352242957 +0000 UTC m=+0.079676606 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 02 16:01:13 compute-0 podman[282303]: 2026-02-02 16:01:13.357362752 +0000 UTC m=+0.083614702 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 02 16:01:13 compute-0 ceph-mon[75334]: pgmap v2114: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:13 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:15 compute-0 nova_compute[239545]: 2026-02-02 16:01:15.350 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:15 compute-0 nova_compute[239545]: 2026-02-02 16:01:15.604 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:15 compute-0 ceph-mon[75334]: pgmap v2115: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:17 compute-0 nova_compute[239545]: 2026-02-02 16:01:17.547 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:17 compute-0 ceph-mon[75334]: pgmap v2116: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:18 compute-0 nova_compute[239545]: 2026-02-02 16:01:18.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:18 compute-0 nova_compute[239545]: 2026-02-02 16:01:18.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 16:01:18 compute-0 nova_compute[239545]: 2026-02-02 16:01:18.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 16:01:18 compute-0 nova_compute[239545]: 2026-02-02 16:01:18.567 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 16:01:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:19 compute-0 ceph-mon[75334]: pgmap v2117: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:20 compute-0 nova_compute[239545]: 2026-02-02 16:01:20.351 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:20 compute-0 nova_compute[239545]: 2026-02-02 16:01:20.607 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:21 compute-0 ceph-mon[75334]: pgmap v2118: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:23 compute-0 nova_compute[239545]: 2026-02-02 16:01:23.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:23 compute-0 ovn_controller[144995]: 2026-02-02T16:01:23Z|00293|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 02 16:01:23 compute-0 ceph-mon[75334]: pgmap v2119: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:25 compute-0 nova_compute[239545]: 2026-02-02 16:01:25.356 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:25 compute-0 nova_compute[239545]: 2026-02-02 16:01:25.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:25 compute-0 nova_compute[239545]: 2026-02-02 16:01:25.609 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:25 compute-0 ceph-mon[75334]: pgmap v2120: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:27 compute-0 ceph-mon[75334]: pgmap v2121: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 16:01:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3605085945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:01:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 16:01:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3605085945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:01:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3605085945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:01:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/3605085945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:01:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.546 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.582 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.583 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.583 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.583 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 16:01:29 compute-0 nova_compute[239545]: 2026-02-02 16:01:29.584 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:01:29 compute-0 ceph-mon[75334]: pgmap v2122: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:01:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017294115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.109 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.266 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.268 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4267MB free_disk=59.98818066995591GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.268 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.268 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:01:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.331 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.332 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.351 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.371 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.612 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4017294115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:01:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:01:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459810916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.939 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.945 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.960 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.984 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 16:01:30 compute-0 nova_compute[239545]: 2026-02-02 16:01:30.985 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:01:31 compute-0 ceph-mon[75334]: pgmap v2123: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/459810916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:01:31 compute-0 nova_compute[239545]: 2026-02-02 16:01:31.986 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:31 compute-0 nova_compute[239545]: 2026-02-02 16:01:31.986 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:01:31 compute-0 nova_compute[239545]: 2026-02-02 16:01:31.986 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 16:01:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:33 compute-0 ceph-mon[75334]: pgmap v2124: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:35 compute-0 nova_compute[239545]: 2026-02-02 16:01:35.360 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:35 compute-0 nova_compute[239545]: 2026-02-02 16:01:35.614 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:35 compute-0 ceph-mon[75334]: pgmap v2125: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:37 compute-0 ceph-mon[75334]: pgmap v2126: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:39 compute-0 ceph-mon[75334]: pgmap v2127: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:40 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:40 compute-0 nova_compute[239545]: 2026-02-02 16:01:40.362 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:40 compute-0 nova_compute[239545]: 2026-02-02 16:01:40.617 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:41 compute-0 ceph-mon[75334]: pgmap v2128: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Optimize plan auto_2026-02-02_16:01:42
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: [balancer INFO root] do_upmap
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr']
Feb 02 16:01:42 compute-0 ceph-mgr[75628]: [balancer INFO root] prepared 0/10 upmap changes
Feb 02 16:01:43 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:43 compute-0 ceph-mon[75334]: pgmap v2129: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:44 compute-0 podman[282396]: 2026-02-02 16:01:44.327104771 +0000 UTC m=+0.053468794 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 02 16:01:44 compute-0 podman[282395]: 2026-02-02 16:01:44.364759675 +0000 UTC m=+0.090577734 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, container_name=ovn_controller)
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:01:44 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 02 16:01:45 compute-0 ceph-mgr[75628]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 02 16:01:45 compute-0 nova_compute[239545]: 2026-02-02 16:01:45.364 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:45 compute-0 nova_compute[239545]: 2026-02-02 16:01:45.620 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:45 compute-0 ceph-mon[75334]: pgmap v2130: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:46 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:47 compute-0 ceph-mon[75334]: pgmap v2131: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:48 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:49 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:49 compute-0 ceph-mon[75334]: pgmap v2132: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:50 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:50 compute-0 nova_compute[239545]: 2026-02-02 16:01:50.366 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:50 compute-0 nova_compute[239545]: 2026-02-02 16:01:50.622 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:51 compute-0 ceph-mon[75334]: pgmap v2133: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:52 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:53 compute-0 ceph-mon[75334]: pgmap v2134: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:54 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] _maybe_adjust
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6766615411363839e-06 of space, bias 1.0, pg target 0.0005029984623409152 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002929071092416185 of space, bias 1.0, pg target 0.8787213277248556 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.860744638524344e-06 of space, bias 1.0, pg target 0.0011582233915573034 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677604493214888 of space, bias 1.0, pg target 0.20032813479644662 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4227334119584742e-06 of space, bias 4.0, pg target 0.001707280094350169 quantized to 16 (current 16)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 02 16:01:54 compute-0 ceph-mgr[75628]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 02 16:01:55 compute-0 ceph-mon[75334]: pgmap v2135: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:55 compute-0 nova_compute[239545]: 2026-02-02 16:01:55.368 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:55 compute-0 nova_compute[239545]: 2026-02-02 16:01:55.624 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:01:56 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:56 compute-0 sudo[282441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:01:56 compute-0 sudo[282441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:56 compute-0 sudo[282441]: pam_unix(sudo:session): session closed for user root
Feb 02 16:01:56 compute-0 sudo[282466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 02 16:01:56 compute-0 sudo[282466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:57 compute-0 podman[282537]: 2026-02-02 16:01:57.324903362 +0000 UTC m=+0.076384066 container exec a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 02 16:01:57 compute-0 ceph-mon[75334]: pgmap v2136: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:57 compute-0 podman[282537]: 2026-02-02 16:01:57.451910205 +0000 UTC m=+0.203390929 container exec_died a5faa4b9cf66b48800f52b7f047775780492085d2c07632f1ceefb9dc837ed59 (image=quay.io/ceph/ceph:v20, name=ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 02 16:01:58 compute-0 sudo[282466]: pam_unix(sudo:session): session closed for user root
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:58 compute-0 sudo[282725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:01:58 compute-0 sudo[282725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:58 compute-0 sudo[282725]: pam_unix(sudo:session): session closed for user root
Feb 02 16:01:58 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:58 compute-0 sudo[282750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 02 16:01:58 compute-0 sudo[282750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:58 compute-0 sudo[282750]: pam_unix(sudo:session): session closed for user root
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:58 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 02 16:01:58 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 02 16:01:59 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 16:01:59 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:01:59 compute-0 sudo[282807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:01:59 compute-0 sudo[282807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:59 compute-0 sudo[282807]: pam_unix(sudo:session): session closed for user root
Feb 02 16:01:59 compute-0 sudo[282832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 02 16:01:59 compute-0 sudo[282832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:59 compute-0 ceph-mon[75334]: pgmap v2137: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:01:59 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:01:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:01:59.274 154982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:01:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:01:59.275 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:01:59 compute-0 ovn_metadata_agent[154977]: 2026-02-02 16:01:59.275 154982 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.429401574 +0000 UTC m=+0.044492699 container create e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:01:59 compute-0 systemd[1]: Started libpod-conmon-e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06.scope.
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.409355649 +0000 UTC m=+0.024446804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:01:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.524029328 +0000 UTC m=+0.139120483 container init e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.535211614 +0000 UTC m=+0.150302739 container start e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.539047269 +0000 UTC m=+0.154138494 container attach e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 16:01:59 compute-0 boring_wilbur[282884]: 167 167
Feb 02 16:01:59 compute-0 systemd[1]: libpod-e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06.scope: Deactivated successfully.
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.543837227 +0000 UTC m=+0.158928352 container died e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f28e39f2ab3c498039b98bd338866bcf906d3f172e41536b589b0876cbe65e-merged.mount: Deactivated successfully.
Feb 02 16:01:59 compute-0 podman[282868]: 2026-02-02 16:01:59.584348056 +0000 UTC m=+0.199439181 container remove e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 02 16:01:59 compute-0 systemd[1]: libpod-conmon-e2f6ffb43f12d83d22db47ac17272d33f10e5a514162d3a4dfaf031c1a596f06.scope: Deactivated successfully.
Feb 02 16:01:59 compute-0 podman[282908]: 2026-02-02 16:01:59.778295002 +0000 UTC m=+0.059759546 container create 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:01:59 compute-0 systemd[1]: Started libpod-conmon-816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1.scope.
Feb 02 16:01:59 compute-0 podman[282908]: 2026-02-02 16:01:59.752083625 +0000 UTC m=+0.033548249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:01:59 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 02 16:01:59 compute-0 podman[282908]: 2026-02-02 16:01:59.883403915 +0000 UTC m=+0.164868529 container init 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 16:01:59 compute-0 podman[282908]: 2026-02-02 16:01:59.893288349 +0000 UTC m=+0.174752883 container start 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:01:59 compute-0 podman[282908]: 2026-02-02 16:01:59.8973941 +0000 UTC m=+0.178858734 container attach 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 02 16:02:00 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:00 compute-0 quirky_taussig[282924]: --> passed data devices: 0 physical, 3 LVM
Feb 02 16:02:00 compute-0 quirky_taussig[282924]: --> All data devices are unavailable
Feb 02 16:02:00 compute-0 systemd[1]: libpod-816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1.scope: Deactivated successfully.
Feb 02 16:02:00 compute-0 podman[282908]: 2026-02-02 16:02:00.359352297 +0000 UTC m=+0.640816841 container died 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 16:02:00 compute-0 nova_compute[239545]: 2026-02-02 16:02:00.370 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96102681dfca6544db04ddbff93d7432769bf26d6295be49421ff666db63d78-merged.mount: Deactivated successfully.
Feb 02 16:02:00 compute-0 podman[282908]: 2026-02-02 16:02:00.409692459 +0000 UTC m=+0.691156993 container remove 816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_taussig, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 02 16:02:00 compute-0 systemd[1]: libpod-conmon-816357dee1a883052a48255d7600928503b3f5b6175ea0c3109a2486df70bfd1.scope: Deactivated successfully.
Feb 02 16:02:00 compute-0 sudo[282832]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:00 compute-0 sudo[282956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:02:00 compute-0 sudo[282956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:02:00 compute-0 sudo[282956]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:00 compute-0 sudo[282981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- lvm list --format json
Feb 02 16:02:00 compute-0 sudo[282981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:02:00 compute-0 nova_compute[239545]: 2026-02-02 16:02:00.626 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:00 compute-0 podman[283020]: 2026-02-02 16:02:00.927881954 +0000 UTC m=+0.057392237 container create f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 02 16:02:00 compute-0 systemd[1]: Started libpod-conmon-f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4.scope.
Feb 02 16:02:00 compute-0 podman[283020]: 2026-02-02 16:02:00.904683701 +0000 UTC m=+0.034193984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:02:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:02:01 compute-0 podman[283020]: 2026-02-02 16:02:01.027527652 +0000 UTC m=+0.157037935 container init f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 02 16:02:01 compute-0 podman[283020]: 2026-02-02 16:02:01.037109359 +0000 UTC m=+0.166619612 container start f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 02 16:02:01 compute-0 podman[283020]: 2026-02-02 16:02:01.040872822 +0000 UTC m=+0.170383085 container attach f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 02 16:02:01 compute-0 clever_booth[283037]: 167 167
Feb 02 16:02:01 compute-0 systemd[1]: libpod-f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4.scope: Deactivated successfully.
Feb 02 16:02:01 compute-0 conmon[283037]: conmon f8d73ece4b6448a9d32c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4.scope/container/memory.events
Feb 02 16:02:01 compute-0 podman[283020]: 2026-02-02 16:02:01.044842739 +0000 UTC m=+0.174352992 container died f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 16:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e7e3db2357e49e47941bc860d89419e7908574f1c7e590fec3920dd42252be-merged.mount: Deactivated successfully.
Feb 02 16:02:01 compute-0 podman[283020]: 2026-02-02 16:02:01.086937829 +0000 UTC m=+0.216448072 container remove f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 02 16:02:01 compute-0 systemd[1]: libpod-conmon-f8d73ece4b6448a9d32c005c9fb16c5627f0dbae9ac70435edd38783eb9971e4.scope: Deactivated successfully.
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.295461903 +0000 UTC m=+0.060876063 container create 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 02 16:02:01 compute-0 systemd[1]: Started libpod-conmon-8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313.scope.
Feb 02 16:02:01 compute-0 ceph-mon[75334]: pgmap v2138: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.273380059 +0000 UTC m=+0.038794249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:02:01 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd42ff2c7b9af01260c3a6fc9c7f4f2b6e433c7a11ac49fe1bcbc1669e9945d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd42ff2c7b9af01260c3a6fc9c7f4f2b6e433c7a11ac49fe1bcbc1669e9945d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd42ff2c7b9af01260c3a6fc9c7f4f2b6e433c7a11ac49fe1bcbc1669e9945d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd42ff2c7b9af01260c3a6fc9c7f4f2b6e433c7a11ac49fe1bcbc1669e9945d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.395273726 +0000 UTC m=+0.160687896 container init 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.409260851 +0000 UTC m=+0.174675011 container start 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.413610308 +0000 UTC m=+0.179024498 container attach 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]: {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     "0": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "devices": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "/dev/loop3"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             ],
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_name": "ceph_lv0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_size": "21470642176",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=3bab3955-37f6-439d-a6d9-c93f1b81f868,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "name": "ceph_lv0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "tags": {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_uuid": "QvZLRM-R7Dk-2ndq-F4fV-F72S-cKaM-QH9LDj",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_name": "ceph",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.crush_device_class": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.encrypted": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.objectstore": "bluestore",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_fsid": "3bab3955-37f6-439d-a6d9-c93f1b81f868",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_id": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.vdo": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.with_tpm": "0"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             },
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "vg_name": "ceph_vg0"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         }
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     ],
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     "1": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "devices": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "/dev/loop4"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             ],
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_name": "ceph_lv1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_size": "21470642176",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d1192b72-b454-486a-9485-4e52faa418e9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "name": "ceph_lv1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "tags": {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_uuid": "eNz15o-UgFn-LrYY-C0dt-2YDe-fsmK-zH4TWW",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_name": "ceph",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.crush_device_class": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.encrypted": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.objectstore": "bluestore",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_fsid": "d1192b72-b454-486a-9485-4e52faa418e9",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_id": "1",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.vdo": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.with_tpm": "0"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             },
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "vg_name": "ceph_vg1"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         }
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     ],
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     "2": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "devices": [
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "/dev/loop5"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             ],
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_name": "ceph_lv2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_size": "21470642176",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e43470b2-6632-573a-87d3-0f5428ec59e9,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aa948d65-9934-4797-913a-22fcbacb9ed9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "lv_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "name": "ceph_lv2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "tags": {
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.block_uuid": "24ZSI2-CLd7-Mjdl-Pd3E-TBBQ-a2QY-sR3LkY",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cephx_lockbox_secret": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_fsid": "e43470b2-6632-573a-87d3-0f5428ec59e9",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.cluster_name": "ceph",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.crush_device_class": "",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.encrypted": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.objectstore": "bluestore",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_fsid": "aa948d65-9934-4797-913a-22fcbacb9ed9",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osd_id": "2",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.vdo": "0",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:                 "ceph.with_tpm": "0"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             },
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "type": "block",
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:             "vg_name": "ceph_vg2"
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:         }
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]:     ]
Feb 02 16:02:01 compute-0 quizzical_faraday[283078]: }
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.823943632 +0000 UTC m=+0.589357832 container died 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 02 16:02:01 compute-0 systemd[1]: libpod-8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313.scope: Deactivated successfully.
Feb 02 16:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd42ff2c7b9af01260c3a6fc9c7f4f2b6e433c7a11ac49fe1bcbc1669e9945d-merged.mount: Deactivated successfully.
Feb 02 16:02:01 compute-0 podman[283061]: 2026-02-02 16:02:01.874316335 +0000 UTC m=+0.639730515 container remove 8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 16:02:01 compute-0 systemd[1]: libpod-conmon-8d382d28d509df946e76d0bc4982cc3ec3e3f54162904f71f5ee60f2078e8313.scope: Deactivated successfully.
Feb 02 16:02:01 compute-0 sudo[282981]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:01 compute-0 sudo[283100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 02 16:02:01 compute-0 sudo[283100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:02:02 compute-0 sudo[283100]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:02 compute-0 sudo[283125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e43470b2-6632-573a-87d3-0f5428ec59e9/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid e43470b2-6632-573a-87d3-0f5428ec59e9 -- raw list --format json
Feb 02 16:02:02 compute-0 sudo[283125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:02:02 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.339504421 +0000 UTC m=+0.048757364 container create 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 02 16:02:02 compute-0 systemd[1]: Started libpod-conmon-7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d.scope.
Feb 02 16:02:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.318116184 +0000 UTC m=+0.027369137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.419607097 +0000 UTC m=+0.128860060 container init 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.427428841 +0000 UTC m=+0.136681764 container start 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.432370192 +0000 UTC m=+0.141623115 container attach 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 16:02:02 compute-0 funny_antonelli[283178]: 167 167
Feb 02 16:02:02 compute-0 systemd[1]: libpod-7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d.scope: Deactivated successfully.
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.433836569 +0000 UTC m=+0.143089502 container died 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 02 16:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c9757d7053342f814d9db1384e3b122d9fd21c35a377b5170c44f6d6fcd799-merged.mount: Deactivated successfully.
Feb 02 16:02:02 compute-0 podman[283162]: 2026-02-02 16:02:02.48576099 +0000 UTC m=+0.195013903 container remove 7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 02 16:02:02 compute-0 systemd[1]: libpod-conmon-7f5e9a903942753fef496d4e01d62da6bdd66e96464079457038f09070c9bd8d.scope: Deactivated successfully.
Feb 02 16:02:02 compute-0 podman[283202]: 2026-02-02 16:02:02.650211157 +0000 UTC m=+0.053965232 container create 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 02 16:02:02 compute-0 systemd[1]: Started libpod-conmon-7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4.scope.
Feb 02 16:02:02 compute-0 systemd[1]: Started libcrun container.
Feb 02 16:02:02 compute-0 podman[283202]: 2026-02-02 16:02:02.636078179 +0000 UTC m=+0.039832274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 02 16:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e1b883b946448a99d831cb8f482e02951848033ba13672e39eaea7175aca11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e1b883b946448a99d831cb8f482e02951848033ba13672e39eaea7175aca11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e1b883b946448a99d831cb8f482e02951848033ba13672e39eaea7175aca11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e1b883b946448a99d831cb8f482e02951848033ba13672e39eaea7175aca11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 02 16:02:02 compute-0 podman[283202]: 2026-02-02 16:02:02.739993972 +0000 UTC m=+0.143748067 container init 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 02 16:02:02 compute-0 podman[283202]: 2026-02-02 16:02:02.749513767 +0000 UTC m=+0.153267842 container start 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 02 16:02:02 compute-0 podman[283202]: 2026-02-02 16:02:02.753301481 +0000 UTC m=+0.157055606 container attach 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:02:03 compute-0 ceph-mon[75334]: pgmap v2139: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:03 compute-0 lvm[283296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 16:02:03 compute-0 lvm[283296]: VG ceph_vg0 finished
Feb 02 16:02:03 compute-0 lvm[283297]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 16:02:03 compute-0 lvm[283297]: VG ceph_vg1 finished
Feb 02 16:02:03 compute-0 lvm[283299]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 16:02:03 compute-0 lvm[283299]: VG ceph_vg2 finished
Feb 02 16:02:03 compute-0 suspicious_aryabhata[283218]: {}
Feb 02 16:02:03 compute-0 systemd[1]: libpod-7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4.scope: Deactivated successfully.
Feb 02 16:02:03 compute-0 systemd[1]: libpod-7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4.scope: Consumed 1.179s CPU time.
Feb 02 16:02:03 compute-0 podman[283202]: 2026-02-02 16:02:03.532854893 +0000 UTC m=+0.936608978 container died 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 02 16:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e1b883b946448a99d831cb8f482e02951848033ba13672e39eaea7175aca11-merged.mount: Deactivated successfully.
Feb 02 16:02:03 compute-0 podman[283202]: 2026-02-02 16:02:03.580538841 +0000 UTC m=+0.984292916 container remove 7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 02 16:02:03 compute-0 systemd[1]: libpod-conmon-7748c381549e4093ac53073fc79d4e637176d0a906bb1f1046a293df9f117ac4.scope: Deactivated successfully.
Feb 02 16:02:03 compute-0 sudo[283125]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 02 16:02:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:02:03 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 02 16:02:03 compute-0 ceph-mon[75334]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:02:03 compute-0 sudo[283313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 02 16:02:03 compute-0 sudo[283313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 02 16:02:03 compute-0 sudo[283313]: pam_unix(sudo:session): session closed for user root
Feb 02 16:02:04 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.207176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124207209, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1622, "num_deletes": 251, "total_data_size": 2650459, "memory_usage": 2685632, "flush_reason": "Manual Compaction"}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124223313, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2580707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41463, "largest_seqno": 43084, "table_properties": {"data_size": 2573202, "index_size": 4448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14313, "raw_average_key_size": 18, "raw_value_size": 2558287, "raw_average_value_size": 3309, "num_data_blocks": 199, "num_entries": 773, "num_filter_entries": 773, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770047955, "oldest_key_time": 1770047955, "file_creation_time": 1770048124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 16212 microseconds, and 4957 cpu microseconds.
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.223379) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2580707 bytes OK
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.223410) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.225419) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.225444) EVENT_LOG_v1 {"time_micros": 1770048124225436, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.225473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2643438, prev total WAL file size 2643438, number of live WAL files 2.
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.226507) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2520KB)], [89(10056KB)]
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124226590, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12878316, "oldest_snapshot_seqno": -1}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7304 keys, 12136016 bytes, temperature: kUnknown
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124274692, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 12136016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12081402, "index_size": 35238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 186574, "raw_average_key_size": 25, "raw_value_size": 11944437, "raw_average_value_size": 1635, "num_data_blocks": 1384, "num_entries": 7304, "num_filter_entries": 7304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770044783, "oldest_key_time": 0, "file_creation_time": 1770048124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b7096c04-39ee-4763-9c12-88827d921c4c", "db_session_id": "808TM54KTF2S4YGE1ZJW", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.274961) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 12136016 bytes
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.276811) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.2 rd, 251.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.8 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(9.7) write-amplify(4.7) OK, records in: 7821, records dropped: 517 output_compression: NoCompression
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.276826) EVENT_LOG_v1 {"time_micros": 1770048124276818, "job": 52, "event": "compaction_finished", "compaction_time_micros": 48196, "compaction_time_cpu_micros": 21665, "output_level": 6, "num_output_files": 1, "total_output_size": 12136016, "num_input_records": 7821, "num_output_records": 7304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124277160, "job": 52, "event": "table_file_deletion", "file_number": 91}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770048124278267, "job": 52, "event": "table_file_deletion", "file_number": 89}
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.226352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.278334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.278339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.278341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.278343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mon[75334]: rocksdb: (Original Log Time 2026/02/02-16:02:04.278344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 02 16:02:04 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:04 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:02:04 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' 
Feb 02 16:02:05 compute-0 nova_compute[239545]: 2026-02-02 16:02:05.372 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:05 compute-0 sshd-session[283338]: Accepted publickey for zuul from 192.168.122.10 port 45828 ssh2: ECDSA SHA256:pJ38khHc6yt5juzKD1sW0tWbR10nYIVDPm9w93zP3z8
Feb 02 16:02:05 compute-0 systemd-logind[786]: New session 52 of user zuul.
Feb 02 16:02:05 compute-0 systemd[1]: Started Session 52 of User zuul.
Feb 02 16:02:05 compute-0 sshd-session[283338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 02 16:02:05 compute-0 sudo[283342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 02 16:02:05 compute-0 sudo[283342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 02 16:02:05 compute-0 nova_compute[239545]: 2026-02-02 16:02:05.630 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:05 compute-0 ceph-mon[75334]: pgmap v2140: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:06 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:07 compute-0 ceph-mon[75334]: pgmap v2141: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:08 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19120 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:08 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:08 compute-0 ceph-mon[75334]: from='client.19120 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:08 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19122 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:09 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 02 16:02:09 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091594774' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 02 16:02:09 compute-0 ceph-mon[75334]: pgmap v2142: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:09 compute-0 ceph-mon[75334]: from='client.19122 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:09 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4091594774' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 02 16:02:10 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:10 compute-0 nova_compute[239545]: 2026-02-02 16:02:10.374 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:10 compute-0 nova_compute[239545]: 2026-02-02 16:02:10.632 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:11 compute-0 ceph-mon[75334]: pgmap v2143: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:12 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:13 compute-0 ceph-mon[75334]: pgmap v2144: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:14 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:14 compute-0 ovs-vsctl[283665]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] scanning for idle connections..
Feb 02 16:02:14 compute-0 ceph-mgr[75628]: [volumes INFO mgr_util] cleaning up connections: []
Feb 02 16:02:15 compute-0 podman[283727]: 2026-02-02 16:02:15.330505044 +0000 UTC m=+0.060148344 container health_status 79a93cadd29578defef3cacca5a44f88615ffc7e8456abad9f00724dbcdf1ad3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 02 16:02:15 compute-0 nova_compute[239545]: 2026-02-02 16:02:15.400 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:15 compute-0 podman[283716]: 2026-02-02 16:02:15.418981988 +0000 UTC m=+0.148389902 container health_status 3991a52ed18485043a041c8c7c5256111a5fcc3bb4f4efa63fda48491b0e0a53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a4bf74a4ad8ed5f42d9f68dbcb94c4fca75d7baaede34e83d3966c01f1cc405-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65-8ae32f9af2146aed460eccaf9469c325c32376342dfe49a7efb602d526211b65'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 02 16:02:15 compute-0 virtqemud[239316]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 02 16:02:15 compute-0 virtqemud[239316]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 02 16:02:15 compute-0 virtqemud[239316]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 02 16:02:15 compute-0 nova_compute[239545]: 2026-02-02 16:02:15.634 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:15 compute-0 ceph-mon[75334]: pgmap v2145: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:16 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: cache status {prefix=cache status} (starting...)
Feb 02 16:02:16 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: client ls {prefix=client ls} (starting...)
Feb 02 16:02:16 compute-0 lvm[284059]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 02 16:02:16 compute-0 lvm[284059]: VG ceph_vg0 finished
Feb 02 16:02:16 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:16 compute-0 lvm[284062]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 02 16:02:16 compute-0 lvm[284062]: VG ceph_vg2 finished
Feb 02 16:02:16 compute-0 lvm[284068]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 02 16:02:16 compute-0 lvm[284068]: VG ceph_vg1 finished
Feb 02 16:02:16 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19126 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:16 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: damage ls {prefix=damage ls} (starting...)
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump loads {prefix=dump loads} (starting...)
Feb 02 16:02:17 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19128 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 02 16:02:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 02 16:02:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180015852' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 02 16:02:17 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19132 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 02 16:02:17 compute-0 ceph-mon[75334]: pgmap v2146: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:17 compute-0 ceph-mon[75334]: from='client.19126 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:17 compute-0 ceph-mon[75334]: from='client.19128 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:17 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/180015852' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 02 16:02:17 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 02 16:02:17 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776805287' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:02:17 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 02 16:02:18 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19136 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:18 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: 2026-02-02T16:02:18.027+0000 7f9528241640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 02 16:02:18 compute-0 ceph-mgr[75628]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 02 16:02:18 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: ops {prefix=ops} (starting...)
Feb 02 16:02:18 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 02 16:02:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358198225' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 02 16:02:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416955681' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: from='client.19132 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2776805287' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: from='client.19136 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2358198225' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 02 16:02:18 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/416955681' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 02 16:02:18 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: session ls {prefix=session ls} (starting...)
Feb 02 16:02:18 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 02 16:02:18 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227073713' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mds[95441]: mds.cephfs.compute-0.mcxxtn asok_command: status {prefix=status} (starting...)
Feb 02 16:02:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 16:02:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379869208' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:19 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19146 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:19 compute-0 nova_compute[239545]: 2026-02-02 16:02:19.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:19 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 16:02:19 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464359759' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mon[75334]: pgmap v2147: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4227073713' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/379869208' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1464359759' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 02 16:02:19 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19150 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 16:02:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682092499' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 02 16:02:20 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.401 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 02 16:02:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331930408' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.546 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.582 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 02 16:02:20 compute-0 nova_compute[239545]: 2026-02-02 16:02:20.636 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:20 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 16:02:20 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289248183' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: from='client.19146 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: from='client.19150 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/682092499' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1331930408' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 02 16:02:20 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1289248183' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 02 16:02:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 02 16:02:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274463553' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 02 16:02:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 02 16:02:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247184228' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 02 16:02:21 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19162 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:21 compute-0 ceph-mgr[75628]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 16:02:21 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: 2026-02-02T16:02:21.507+0000 7f9528241640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 02 16:02:21 compute-0 ceph-mon[75334]: pgmap v2148: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4274463553' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 02 16:02:21 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3247184228' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 02 16:02:21 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 16:02:21 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801929998' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 02 16:02:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 02 16:02:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4202042859' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 02 16:02:22 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19168 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:22 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:22 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 02 16:02:22 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242434616' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 ms_handle_reset con 0x559dd1740000 session 0x559dd22f7500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 30425088 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:44.573300+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 30425088 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4990884 data_alloc: 234881024 data_used: 25587384
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:45.573458+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 heartbeat osd_stat(store_statfs(0x4d555e000/0x0/0x4ffc00000, data 0x267c3475/0x2692c000, compress 0x0/0x0/0x0, omap 0x2ad0a, meta 0x3d452f6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137478144 unmapped: 30375936 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 ms_handle_reset con 0x559dd331f000 session 0x559dd3be2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 ms_handle_reset con 0x559dd4f7f800 session 0x559dd17d0e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:46.573561+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7e400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 ms_handle_reset con 0x559dd4f7e400 session 0x559dd07d2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 30154752 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 248 handle_osd_map epochs [248,249], i have 249, src has [1,249]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:47.573726+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd1740000 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd331f000 session 0x559dd67421c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd4f7f800 session 0x559dd08a21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 30121984 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd1b14a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd4f7f000 session 0x559dd2ec1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 ms_handle_reset con 0x559dd331f000 session 0x559dd2f5b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:48.573878+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 30072832 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:49.574007+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 250 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd1b15180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 28827648 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5119169 data_alloc: 234881024 data_used: 25587700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 ms_handle_reset con 0x559dd4f7f800 session 0x559dd2985180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:50.574164+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 ms_handle_reset con 0x559dd6aaa000 session 0x559dd58616c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 ms_handle_reset con 0x559dd1740000 session 0x559dd2fabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139034624 unmapped: 28819456 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:51.574279+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 heartbeat osd_stat(store_statfs(0x4d4431000/0x0/0x4ffc00000, data 0x278e6d78/0x27a57000, compress 0x0/0x0/0x0, omap 0x2b326, meta 0x3d44cda), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 28803072 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:52.574404+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 28803072 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 ms_handle_reset con 0x559dd331f000 session 0x559dd2f5b340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:53.574516+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 heartbeat osd_stat(store_statfs(0x4d4431000/0x0/0x4ffc00000, data 0x278e6d78/0x27a57000, compress 0x0/0x0/0x0, omap 0x2b326, meta 0x3d44cda), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 28770304 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:54.574629+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139083776 unmapped: 28770304 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5125728 data_alloc: 234881024 data_used: 25587700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 heartbeat osd_stat(store_statfs(0x4d4431000/0x0/0x4ffc00000, data 0x278e6d78/0x27a57000, compress 0x0/0x0/0x0, omap 0x2b326, meta 0x3d44cda), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:55.574771+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 27869184 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:56.574891+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 27860992 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:57.575045+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 27860992 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.851079941s of 14.186966896s, submitted: 91
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:58.575163+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd6aaa000 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139321344 unmapped: 28532736 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:35:59.575276+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd6aaa400 session 0x559dd2984c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd6aaac00 session 0x559dd1543880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aab000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd6aab000 session 0x559dd2fab340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd331f000 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 ms_handle_reset con 0x559dd6aaa000 session 0x559dd5860e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 heartbeat osd_stat(store_statfs(0x4d401c000/0x0/0x4ffc00000, data 0x27cfc914/0x27e6e000, compress 0x0/0x0/0x0, omap 0x2b78d, meta 0x3d44873), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139444224 unmapped: 28409856 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5159055 data_alloc: 234881024 data_used: 26644468
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:00.575403+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 253 ms_handle_reset con 0x559dd6aaa800 session 0x559dd147a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 253 ms_handle_reset con 0x559dd6aaa400 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139452416 unmapped: 28401664 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:01.575522+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aab400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 253 ms_handle_reset con 0x559dd6aab400 session 0x559dd2faaa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 254 ms_handle_reset con 0x559dd6aaac00 session 0x559dd22f7a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aab400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 254 ms_handle_reset con 0x559dd6aab400 session 0x559dd22f6000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 28246016 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 254 ms_handle_reset con 0x559dd6aaa000 session 0x559dd3be21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:02.575670+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 255 ms_handle_reset con 0x559dd331f000 session 0x559dd2fdac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 255 ms_handle_reset con 0x559dd6aaa400 session 0x559dd26d48c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 255 ms_handle_reset con 0x559dd331f000 session 0x559dd147a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139542528 unmapped: 28311552 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 255 ms_handle_reset con 0x559dd6aaa000 session 0x559dd26d56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:03.575753+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 ms_handle_reset con 0x559dd6aaac00 session 0x559dd27a5340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aab400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aab800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139698176 unmapped: 28155904 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:04.575856+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 heartbeat osd_stat(store_statfs(0x4d400f000/0x0/0x4ffc00000, data 0x27d038b0/0x27e7b000, compress 0x0/0x0/0x0, omap 0x2c203, meta 0x3d43dfd), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 24870912 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5190981 data_alloc: 251658240 data_used: 29807794
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4d880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 ms_handle_reset con 0x559dd4f7e800 session 0x559dd23b0000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:05.575959+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 ms_handle_reset con 0x559dcfffe400 session 0x559dd6742e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 27410432 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:06.576073+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:07.576211+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 heartbeat osd_stat(store_statfs(0x4d4341000/0x0/0x4ffc00000, data 0x279d38b0/0x27b4b000, compress 0x0/0x0/0x0, omap 0x2c203, meta 0x3d43dfd), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:08.576323+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:09.576436+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 heartbeat osd_stat(store_statfs(0x4d4341000/0x0/0x4ffc00000, data 0x279d38b0/0x27b4b000, compress 0x0/0x0/0x0, omap 0x2c203, meta 0x3d43dfd), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5135473 data_alloc: 234881024 data_used: 23594162
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:10.576571+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:11.631863+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 heartbeat osd_stat(store_statfs(0x4d4341000/0x0/0x4ffc00000, data 0x279d38b0/0x27b4b000, compress 0x0/0x0/0x0, omap 0x2c203, meta 0x3d43dfd), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 256 handle_osd_map epochs [256,257], i have 257, src has [1,257]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.482803345s of 14.100172043s, submitted: 109
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:12.632002+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 27426816 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:13.632124+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 27418624 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:14.632281+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 27418624 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5142637 data_alloc: 234881024 data_used: 23631026
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:15.632418+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 heartbeat osd_stat(store_statfs(0x4d433b000/0x0/0x4ffc00000, data 0x279d6eeb/0x27b51000, compress 0x0/0x0/0x0, omap 0x2c701, meta 0x3d438ff), peers [0,1] op hist [0,0,0,9])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141787136 unmapped: 26066944 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:16.632558+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142295040 unmapped: 25559040 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:17.632757+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 ms_handle_reset con 0x559dd331f000 session 0x559dd07d36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 ms_handle_reset con 0x559dd6aaa000 session 0x559dd5860540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142303232 unmapped: 25550848 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:18.632926+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 ms_handle_reset con 0x559dd6aaac00 session 0x559dd17d1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 ms_handle_reset con 0x559dd6aabc00 session 0x559dd09defc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 ms_handle_reset con 0x559dcfffe400 session 0x559dd147b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141385728 unmapped: 26468352 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:19.633069+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 ms_handle_reset con 0x559dd6aaa000 session 0x559dd2faaa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 26451968 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5222907 data_alloc: 234881024 data_used: 24286386
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 ms_handle_reset con 0x559dd30e4400 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:20.633205+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 ms_handle_reset con 0x559dd6aaac00 session 0x559dd3be36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 ms_handle_reset con 0x559dd331f000 session 0x559dd6742000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 26451968 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:21.633324+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 heartbeat osd_stat(store_statfs(0x4d3e61000/0x0/0x4ffc00000, data 0x28301ae9/0x28028000, compress 0x0/0x0/0x0, omap 0x2ca0b, meta 0x3d435f5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 259 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 260 ms_handle_reset con 0x559dcfffe400 session 0x559dd15421c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 26451968 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:22.633784+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 260 ms_handle_reset con 0x559dd30e4400 session 0x559dd17d1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.172415733s of 10.721768379s, submitted: 116
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 261 ms_handle_reset con 0x559dd6aabc00 session 0x559dd17d1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 26509312 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:23.634425+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 261 ms_handle_reset con 0x559dd6aaac00 session 0x559dd2f5ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141344768 unmapped: 26509312 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 262 ms_handle_reset con 0x559dd6aaa000 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:24.634661+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 263 ms_handle_reset con 0x559dcfffe400 session 0x559dd09dea80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141385728 unmapped: 26468352 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5234507 data_alloc: 234881024 data_used: 24290482
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:25.634810+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 263 heartbeat osd_stat(store_statfs(0x4d3e53000/0x0/0x4ffc00000, data 0x28308bb9/0x28035000, compress 0x0/0x0/0x0, omap 0x2d415, meta 0x3d42beb), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 263 handle_osd_map epochs [263,264], i have 264, src has [1,264]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 26435584 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:26.634968+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 ms_handle_reset con 0x559dd30e4400 session 0x559dd11daa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 26435584 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 ms_handle_reset con 0x559dd6aabc00 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 ms_handle_reset con 0x559dd6aaac00 session 0x559dd6742380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d95400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 ms_handle_reset con 0x559dd2d95400 session 0x559dd2ec1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:27.635244+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 heartbeat osd_stat(store_statfs(0x4d3e4e000/0x0/0x4ffc00000, data 0x2830a78d/0x28038000, compress 0x0/0x0/0x0, omap 0x2d747, meta 0x3d428b9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 26435584 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d95400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 265 ms_handle_reset con 0x559dd2d95400 session 0x559dd07d3a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:28.635535+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 266 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142508032 unmapped: 25346048 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:29.635670+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 266 heartbeat osd_stat(store_statfs(0x4d3e4f000/0x0/0x4ffc00000, data 0x2830c361/0x2803b000, compress 0x0/0x0/0x0, omap 0x2d920, meta 0x3d426e0), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 266 ms_handle_reset con 0x559dd6aaac00 session 0x559dd1c2d340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 267 ms_handle_reset con 0x559dd6aabc00 session 0x559dd1c2da40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 267 ms_handle_reset con 0x559dd30e4400 session 0x559dd08a21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142532608 unmapped: 25321472 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5246307 data_alloc: 234881024 data_used: 24291667
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:30.635771+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d95400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 268 ms_handle_reset con 0x559dd2d95400 session 0x559dd2fdb6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142557184 unmapped: 25296896 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:31.635926+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 268 ms_handle_reset con 0x559dd6aaac00 session 0x559dd2ec0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 268 ms_handle_reset con 0x559dcfffe400 session 0x559dd07d2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142557184 unmapped: 25296896 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:32.636056+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd6aabc00 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142581760 unmapped: 25272320 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:33.636271+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd4f7f800 session 0x559dd6742540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd6742700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.517679214s of 11.096239090s, submitted: 87
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 25862144 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:34.636615+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 heartbeat osd_stat(store_statfs(0x4d45d9000/0x0/0x4ffc00000, data 0x27b81388/0x278b3000, compress 0x0/0x0/0x0, omap 0x2e342, meta 0x3d41cbe), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 25862144 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5196449 data_alloc: 234881024 data_used: 23232257
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:35.636751+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 25862144 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:36.636870+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 25853952 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:37.637036+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 25853952 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd6aab400 session 0x559dd2faba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd6aaa800 session 0x559dd2fdb880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dd6aab800 session 0x559dd3be3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:38.637149+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 ms_handle_reset con 0x559dcfffe400 session 0x559dd1b141c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 heartbeat osd_stat(store_statfs(0x4d45d9000/0x0/0x4ffc00000, data 0x27b81388/0x278b3000, compress 0x0/0x0/0x0, omap 0x2e342, meta 0x3d41cbe), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 28385280 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:39.637339+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 28385280 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5152526 data_alloc: 234881024 data_used: 20045553
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 269 handle_osd_map epochs [269,270], i have 270, src has [1,270]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:40.637444+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 138838016 unmapped: 29016064 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:41.637550+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 138838016 unmapped: 29016064 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 271 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd5860fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:42.637662+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 271 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 272 ms_handle_reset con 0x559dd3310400 session 0x559dd2faac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 272 ms_handle_reset con 0x559dd2d67800 session 0x559dd5861340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 138846208 unmapped: 29007872 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:43.637965+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 273 ms_handle_reset con 0x559dd2d67400 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140910592 unmapped: 26943488 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:44.638192+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.172956467s of 10.406270981s, submitted: 131
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 ms_handle_reset con 0x559dd2d67000 session 0x559dcedd1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a3500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 ms_handle_reset con 0x559dd383e800 session 0x559dd67428c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 heartbeat osd_stat(store_statfs(0x4d41f0000/0x0/0x4ffc00000, data 0x2838ef7e/0x27c96000, compress 0x0/0x0/0x0, omap 0x2f4e8, meta 0x3d40b18), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 26861568 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5265260 data_alloc: 234881024 data_used: 20042086
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:45.638342+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 28205056 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:46.638608+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 274 handle_osd_map epochs [274,275], i have 275, src has [1,275]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 275 ms_handle_reset con 0x559dd2d67800 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 275 ms_handle_reset con 0x559dd3310400 session 0x559dd2faa1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 28205056 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:47.638849+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 28205056 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 275 heartbeat osd_stat(store_statfs(0x4d41f1000/0x0/0x4ffc00000, data 0x28390b9a/0x27c99000, compress 0x0/0x0/0x0, omap 0x2fa09, meta 0x3d405f7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:48.639059+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 heartbeat osd_stat(store_statfs(0x4d41f1000/0x0/0x4ffc00000, data 0x28390b9a/0x27c99000, compress 0x0/0x0/0x0, omap 0x2fa09, meta 0x3d405f7), peers [0,1] op hist [0,0,0,0,1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdb340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 ms_handle_reset con 0x559dd2d67000 session 0x559dd11daa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 28631040 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:49.639288+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 ms_handle_reset con 0x559dd2d67800 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 28631040 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5133660 data_alloc: 234881024 data_used: 19977704
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:50.639517+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 28631040 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:51.639678+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd58601c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 139239424 unmapped: 28614656 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd383e800 session 0x559dd17d1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 heartbeat osd_stat(store_statfs(0x4d51d4000/0x0/0x4ffc00000, data 0x26f7f124/0x26cb6000, compress 0x0/0x0/0x0, omap 0x2fca2, meta 0x3d4035e), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:52.639784+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd2d66c00 session 0x559dd3be28c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd2d67000 session 0x559dd3be2fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dcfffe400 session 0x559dd09de540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 138723328 unmapped: 29130752 heap: 167854080 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:53.639919+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd2d67800 session 0x559dd2fdac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 24911872 heap: 172056576 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:54.640051+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.451243401s of 10.026868820s, submitted: 110
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 143081472 unmapped: 33177600 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5433706 data_alloc: 234881024 data_used: 20043869
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:55.640183+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd2d66400 session 0x559dd3be2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 28581888 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:56.640304+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 ms_handle_reset con 0x559dd2d66c00 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 277 handle_osd_map epochs [277,278], i have 278, src has [1,278]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140083200 unmapped: 36175872 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:57.640482+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 278 heartbeat osd_stat(store_statfs(0x4c9f9b000/0x0/0x4ffc00000, data 0x321b3895/0x31ef1000, compress 0x0/0x0/0x0, omap 0x30671, meta 0x3d3f98f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 278 ms_handle_reset con 0x559dd2d67000 session 0x559dd07d3500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 35880960 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:58.640605+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 278 heartbeat osd_stat(store_statfs(0x4c779b000/0x0/0x4ffc00000, data 0x349b38ce/0x346f1000, compress 0x0/0x0/0x0, omap 0x30671, meta 0x3d3f98f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 35561472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:59.640773+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d66800 session 0x559dd17d0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d67800 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd2fda000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d67800 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 35356672 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6654942 data_alloc: 234881024 data_used: 20043983
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:00.640899+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dcfffe400 session 0x559dd2faa1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 35037184 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:01.641073+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d66800 session 0x559dd5860a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d67000 session 0x559dd2fdac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 35037184 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:02.641173+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dcfffe400 session 0x559dd17d0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 ms_handle_reset con 0x559dd2d66c00 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 57K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3863 syncs, 3.33 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7133 writes, 33K keys, 7133 commit groups, 1.0 writes per commit group, ingest: 16.93 MB, 0.03 MB/s
                                           Interval WAL: 7133 writes, 2965 syncs, 2.41 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d66800 session 0x559dd2f4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d67000 session 0x559dd2faba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d67800 session 0x559dd2ec16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dcfffe400 session 0x559dd58601c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141254656 unmapped: 35004416 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d66800 session 0x559dd11daa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:03.641282+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d67000 session 0x559dd5860fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd2d66c00 session 0x559dd147bc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3319000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 ms_handle_reset con 0x559dd3319000 session 0x559dd5861340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141385728 unmapped: 34873344 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:04.641411+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.138663292s of 10.001189232s, submitted: 256
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 heartbeat osd_stat(store_statfs(0x4d4799000/0x0/0x4ffc00000, data 0x279b6fc9/0x276f3000, compress 0x0/0x0/0x0, omap 0x31037, meta 0x3d3efc9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 heartbeat osd_stat(store_statfs(0x4d4799000/0x0/0x4ffc00000, data 0x279b6fc9/0x276f3000, compress 0x0/0x0/0x0, omap 0x31037, meta 0x3d3efc9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141385728 unmapped: 34873344 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5296481 data_alloc: 234881024 data_used: 20044634
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:05.641551+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 280 handle_osd_map epochs [280,281], i have 281, src has [1,281]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 281 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd07d2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 34865152 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:06.641669+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 281 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5a380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd2d67000 session 0x559dd1b15340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd2d66800 session 0x559dd09dfdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd2d66c00 session 0x559dd2fda540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140779520 unmapped: 35479552 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:07.641814+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: mgrc ms_handle_reset ms_handle_reset con 0x559dd0428400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/273425939
Feb 02 16:02:22 compute-0 ceph-osd[88227]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/273425939,v1:192.168.122.100:6801/273425939]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: get_auth_request con 0x559dd2d66c00 auth_method 0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: mgrc handle_mgr_configure stats_period=5
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 heartbeat osd_stat(store_statfs(0x4d5483000/0x0/0x4ffc00000, data 0x268707f9/0x26a04000, compress 0x0/0x0/0x0, omap 0x31925, meta 0x3d3e6db), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd2d66800 session 0x559dd5860700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd2d67000 session 0x559dd2985880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140894208 unmapped: 35364864 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:08.642778+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 35315712 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:09.642898+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 283 ms_handle_reset con 0x559dcfffe400 session 0x559dd09de380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 ms_handle_reset con 0x559dd2d49c00 session 0x559dd15421c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 ms_handle_reset con 0x559dcfffe400 session 0x559dd26d4540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 ms_handle_reset con 0x559dd2d49c00 session 0x559dd1b141c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141737984 unmapped: 34521088 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959249 data_alloc: 234881024 data_used: 23587462
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:10.643015+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e4c82000/0x0/0x4ffc00000, data 0x17073f15/0x17208000, compress 0x0/0x0/0x0, omap 0x3376a, meta 0x3d3c896), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141737984 unmapped: 34521088 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:11.643145+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e4c82000/0x0/0x4ffc00000, data 0x17073f15/0x17208000, compress 0x0/0x0/0x0, omap 0x3376a, meta 0x3d3c896), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 285 ms_handle_reset con 0x559dd2d66800 session 0x559dd1c2d340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 142786560 unmapped: 33472512 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:12.643246+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 285 ms_handle_reset con 0x559dd2d67000 session 0x559dd2fab6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:13.643380+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:14.643533+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:15.643679+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2295085 data_alloc: 234881024 data_used: 23588717
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:16.643819+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 285 heartbeat osd_stat(store_statfs(0x4f9483000/0x0/0x4ffc00000, data 0x2875afc/0x2a09000, compress 0x0/0x0/0x0, omap 0x33dfe, meta 0x3d3c202), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 285 handle_osd_map epochs [285,286], i have 286, src has [1,286]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.625379562s of 12.797693253s, submitted: 301
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:17.643998+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:18.644291+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 34545664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f947e000/0x0/0x4ffc00000, data 0x287759b/0x2a0c000, compress 0x0/0x0/0x0, omap 0x3409f, meta 0x3d3bf61), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:19.930302+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 286 ms_handle_reset con 0x559dd4f7fc00 session 0x559dd2ec1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 34676736 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2305423 data_alloc: 234881024 data_used: 23588717
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:20.930629+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f9418000/0x0/0x4ffc00000, data 0x28de5fd/0x2a74000, compress 0x0/0x0/0x0, omap 0x342cc, meta 0x3d3bd34), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 ms_handle_reset con 0x559dcfffe400 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 34537472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:21.930772+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 34537472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:22.930927+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 heartbeat osd_stat(store_statfs(0x4f940d000/0x0/0x4ffc00000, data 0x28e6199/0x2a7d000, compress 0x0/0x0/0x0, omap 0x347d6, meta 0x3d3b82a), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 ms_handle_reset con 0x559dd2d66800 session 0x559dd08a3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 ms_handle_reset con 0x559dd2d49c00 session 0x559dd15421c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 34537472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:23.931111+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 34537472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:24.931294+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 287 handle_osd_map epochs [287,288], i have 288, src has [1,288]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 34537472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2319236 data_alloc: 234881024 data_used: 23589400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 288 ms_handle_reset con 0x559dd2d67000 session 0x559dd2faac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:25.931448+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x28e7d97/0x2a81000, compress 0x0/0x0/0x0, omap 0x34b25, meta 0x3d3b4db), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 288 ms_handle_reset con 0x559dd14a4000 session 0x559dd2faa540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141729792 unmapped: 34529280 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:26.931674+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141729792 unmapped: 34529280 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:27.931923+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141729792 unmapped: 34529280 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.321642876s of 11.395038605s, submitted: 34
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:28.932102+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141729792 unmapped: 34529280 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:29.932233+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d49c00 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d66800 session 0x559dd08a2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d67000 session 0x559dd2f5b340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 34512896 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2323600 data_alloc: 234881024 data_used: 23589498
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:30.932381+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5c400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d5c400 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dcfffe400 session 0x559dd5861880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f9406000/0x0/0x4ffc00000, data 0x28e9933/0x2a84000, compress 0x0/0x0/0x0, omap 0x34fa1, meta 0x3d3b05f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 34512896 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:31.932520+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d49c00 session 0x559dd07d3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d66800 session 0x559dd2fabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 34512896 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:32.932677+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 34512896 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d67000 session 0x559dd17d16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 ms_handle_reset con 0x559dd2d67800 session 0x559dd5860e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:33.932769+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 34660352 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dcfffe400 session 0x559dd2ec08c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:34.932874+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd2d5d800 session 0x559dd2fab340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f9403000/0x0/0x4ffc00000, data 0x28eb523/0x2a87000, compress 0x0/0x0/0x0, omap 0x35446, meta 0x3d3abba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2325166 data_alloc: 234881024 data_used: 23589985
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 34611200 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:35.933019+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd2d66800 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd1543180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd383fc00 session 0x559dd2faa380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dcfffe400 session 0x559dd1b156c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd2d5d800 session 0x559dd22f7880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd2d5f000 session 0x559dd22f61c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd14a3400 session 0x559dd147aa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 148275200 unmapped: 27983872 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 290 ms_handle_reset con 0x559dd2d66800 session 0x559dd67436c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dcfffe400 session 0x559dd17d1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:36.933141+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd14a3400 session 0x559dd2f5aa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd2d5d800 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd2d67000 session 0x559dd22f6000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd2d5f000 session 0x559dd2ec1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd2d49c00 session 0x559dd1542700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 35594240 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:37.933312+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 35594240 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd14a3400 session 0x559dd2fda540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:38.933452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 heartbeat osd_stat(store_statfs(0x4f8b4b000/0x0/0x4ffc00000, data 0x31a404f/0x333f000, compress 0x0/0x0/0x0, omap 0x35841, meta 0x3d3a7bf), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 35594240 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:39.933636+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2351419 data_alloc: 234881024 data_used: 19395583
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 35594240 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:40.933876+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 ms_handle_reset con 0x559dd2d5d800 session 0x559dd2f5b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 35594240 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:41.933994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.661998749s of 13.024585724s, submitted: 115
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 292 ms_handle_reset con 0x559dd2d67000 session 0x559dd5860700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 292 ms_handle_reset con 0x559dcfffe400 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 292 ms_handle_reset con 0x559dd14a3400 session 0x559dd1542700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 35553280 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:42.934151+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 293 ms_handle_reset con 0x559dd2d5d800 session 0x559dd3be2c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 140713984 unmapped: 35545088 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:43.934278+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 24190976 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:44.934441+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd689f800 session 0x559dd22f7c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f8b42000/0x0/0x4ffc00000, data 0x31a76c9/0x3348000, compress 0x0/0x0/0x0, omap 0x35d12, meta 0x3d3a2ee), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2444089 data_alloc: 251658240 data_used: 32051781
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 24150016 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:45.934589+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3319400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd3319400 session 0x559dd2f4ddc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd689f400 session 0x559dd2984a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 23871488 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:46.934750+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dcfffe400 session 0x559dd2ec0e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 23871488 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:47.934940+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd14a3400 session 0x559dd2fabdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd2d5d800 session 0x559dd147aa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f8b3b000/0x0/0x4ffc00000, data 0x31a9329/0x334d000, compress 0x0/0x0/0x0, omap 0x35eaf, meta 0x3d3a151), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 23871488 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:48.935125+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3319400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 ms_handle_reset con 0x559dd3319400 session 0x559dd2985180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 23871488 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:49.935263+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 295 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2447673 data_alloc: 251658240 data_used: 32051781
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152395776 unmapped: 23863296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 295 ms_handle_reset con 0x559dd14a3400 session 0x559dd2faa540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:50.935447+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152420352 unmapped: 23838720 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:51.935558+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 295 ms_handle_reset con 0x559dd2d5d800 session 0x559dd23b1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 295 ms_handle_reset con 0x559dd689f400 session 0x559dd2f5b340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.060369492s of 10.220314026s, submitted: 71
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 ms_handle_reset con 0x559dd689f800 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152420352 unmapped: 23838720 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 ms_handle_reset con 0x559dcfffe400 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:52.935663+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 ms_handle_reset con 0x559dd14a3400 session 0x559dd2984540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 152625152 unmapped: 23633920 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:53.935779+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 heartbeat osd_stat(store_statfs(0x4f8b3a000/0x0/0x4ffc00000, data 0x31aca35/0x3350000, compress 0x0/0x0/0x0, omap 0x363b3, meta 0x3d39c4d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153575424 unmapped: 22683648 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:54.935919+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2468696 data_alloc: 251658240 data_used: 32265844
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153829376 unmapped: 22429696 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:55.936017+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 296 handle_osd_map epochs [296,297], i have 297, src has [1,297]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153886720 unmapped: 22372352 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:56.936169+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153886720 unmapped: 22372352 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x34c35ed/0x3668000, compress 0x0/0x0/0x0, omap 0x3671c, meta 0x3d398e4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 297 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:57.936499+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 298 ms_handle_reset con 0x559dd2d5d800 session 0x559dd1543180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 22257664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 298 ms_handle_reset con 0x559dd689f400 session 0x559dd08a2a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:58.936691+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154001408 unmapped: 22257664 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:59.936856+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4c400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 298 ms_handle_reset con 0x559dd2d4c400 session 0x559dd08a2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2484077 data_alloc: 251658240 data_used: 32524477
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 299 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154009600 unmapped: 22249472 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:00.936970+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 299 ms_handle_reset con 0x559dd14a3400 session 0x559dd17d16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154050560 unmapped: 22208512 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:01.937085+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f881d000/0x0/0x4ffc00000, data 0x34c6c6c/0x366f000, compress 0x0/0x0/0x0, omap 0x36e72, meta 0x3d3918e), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154050560 unmapped: 22208512 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:02.937203+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.894374847s of 11.287477493s, submitted: 164
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dd2d5d800 session 0x559dd09dfdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 22200320 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:03.937416+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dd689f400 session 0x559dd23b0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 22192128 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x34c8834/0x3673000, compress 0x0/0x0/0x0, omap 0x3717a, meta 0x3d38e86), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:04.937529+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330b800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2489085 data_alloc: 251658240 data_used: 32525090
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dd330b800 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 22192128 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:05.937662+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dcfffe400 session 0x559dd5860380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f8817000/0x0/0x4ffc00000, data 0x34c8834/0x3673000, compress 0x0/0x0/0x0, omap 0x3717a, meta 0x3d38e86), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154107904 unmapped: 22151168 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:06.937792+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dd14a3400 session 0x559dd07d3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 ms_handle_reset con 0x559dd2d5d800 session 0x559dd1b15340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153722880 unmapped: 22536192 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:07.937968+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153722880 unmapped: 22536192 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:08.938095+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f8815000/0x0/0x4ffc00000, data 0x34ca2a3/0x3675000, compress 0x0/0x0/0x0, omap 0x375ef, meta 0x3d38a11), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 22503424 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:09.938233+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f8815000/0x0/0x4ffc00000, data 0x34ca2a3/0x3675000, compress 0x0/0x0/0x0, omap 0x375ef, meta 0x3d38a11), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2490514 data_alloc: 251658240 data_used: 32525090
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 22503424 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:10.938349+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 22503424 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:11.938472+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 22503424 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:12.938631+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153755648 unmapped: 22503424 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:13.938758+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 301 ms_handle_reset con 0x559dd689f400 session 0x559dd2fdb340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153763840 unmapped: 22495232 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:14.938873+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.819130898s of 11.933044434s, submitted: 51
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2493752 data_alloc: 251658240 data_used: 32525090
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f8810000/0x0/0x4ffc00000, data 0x34cbe3f/0x3678000, compress 0x0/0x0/0x0, omap 0x3777e, meta 0x3d38882), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153763840 unmapped: 22495232 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:15.938995+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153763840 unmapped: 22495232 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:16.939150+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d48400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd2d48400 session 0x559dd2f4d500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4d180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd14a3400 session 0x559dd2f5aa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd2d5d800 session 0x559dd27a5180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 161316864 unmapped: 14942208 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:17.939861+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd689f400 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d48000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd2d48000 session 0x559dd2f4da40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dcfffe400 session 0x559dd07d2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd14a3400 session 0x559dd2985a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd2d5d800 session 0x559dd2f5a380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 22274048 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:18.940021+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 ms_handle_reset con 0x559dd689f400 session 0x559dd2ec1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 22274048 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:19.940185+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f7f7c000/0x0/0x4ffc00000, data 0x3d62e4f/0x3f10000, compress 0x0/0x0/0x0, omap 0x37991, meta 0x3d3866f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2546992 data_alloc: 251658240 data_used: 32525090
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 153993216 unmapped: 22265856 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:20.940334+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3312000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd3312000 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154034176 unmapped: 22224896 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:21.940486+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dcfffe400 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd14a3400 session 0x559dd1b15340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154042368 unmapped: 22216704 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:22.940793+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd2d5d800 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3312000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd3312000 session 0x559dd2f4c8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd689f400 session 0x559dd5860380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154050560 unmapped: 22208512 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:23.940910+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd14a3400 session 0x559dd2985dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd2d5d800 session 0x559dd1b156c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f7f79000/0x0/0x4ffc00000, data 0x3d649eb/0x3f13000, compress 0x0/0x0/0x0, omap 0x380bd, meta 0x3d37f43), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2fdb6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dcfffe400 session 0x559dcedd1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3312000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:24.941053+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154058752 unmapped: 22200320 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3309000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2575834 data_alloc: 251658240 data_used: 36199803
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.172310829s of 10.540171623s, submitted: 57
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dd2d49c00 session 0x559dd2fab6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd3be2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:25.941169+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 18497536 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dd14a3400 session 0x559dd3be2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dcfffe400 session 0x559dd58608c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dd2d49c00 session 0x559dd08a2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:26.941330+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 22036480 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 ms_handle_reset con 0x559dd2d5a800 session 0x559dd23b1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 304 handle_osd_map epochs [304,305], i have 305, src has [1,305]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:27.941547+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 22036480 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:28.941668+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 22028288 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f8fae000/0x0/0x4ffc00000, data 0x2d2f198/0x2ede000, compress 0x0/0x0/0x0, omap 0x3882f, meta 0x3d377d1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:29.941963+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 22028288 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436107 data_alloc: 251658240 data_used: 27271085
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:30.942169+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 22028288 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 305 ms_handle_reset con 0x559dd2d5a800 session 0x559dd08a2a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:31.942304+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154238976 unmapped: 22020096 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 305 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a3180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:32.942471+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 21995520 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:33.942604+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 21995520 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8fa9000/0x0/0x4ffc00000, data 0x2d30c17/0x2ee1000, compress 0x0/0x0/0x0, omap 0x38993, meta 0x3d3766d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 306 ms_handle_reset con 0x559dd14a3400 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:34.942794+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 21995520 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2439389 data_alloc: 251658240 data_used: 27283373
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:35.942927+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 21995520 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.968672752s of 10.256503105s, submitted: 101
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:36.943045+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155287552 unmapped: 20971520 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 306 ms_handle_reset con 0x559dd2d49c00 session 0x559dd1b14380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:37.943416+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155312128 unmapped: 20946944 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f8b04000/0x0/0x4ffc00000, data 0x31d37b3/0x3385000, compress 0x0/0x0/0x0, omap 0x38d11, meta 0x3d372ef), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 ms_handle_reset con 0x559dd2d5d800 session 0x559dd2984540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:38.943616+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155312128 unmapped: 20946944 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f8b04000/0x0/0x4ffc00000, data 0x31d37b3/0x3385000, compress 0x0/0x0/0x0, omap 0x38d11, meta 0x3d372ef), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 307 handle_osd_map epochs [308,308], i have 308, src has [1,308]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:39.943774+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156409856 unmapped: 19849216 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:40.943898+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2476125 data_alloc: 251658240 data_used: 27316141
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 309 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd08a3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156475392 unmapped: 19783680 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:41.944049+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156475392 unmapped: 19783680 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f8aff000/0x0/0x4ffc00000, data 0x31d6f3f/0x338b000, compress 0x0/0x0/0x0, omap 0x3928d, meta 0x3d36d73), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f8aff000/0x0/0x4ffc00000, data 0x31d6f3f/0x338b000, compress 0x0/0x0/0x0, omap 0x3928d, meta 0x3d36d73), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:42.944170+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:43.944319+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:44.944455+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:45.944586+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2479363 data_alloc: 251658240 data_used: 27316141
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:46.944782+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.362527847s of 11.620371819s, submitted: 60
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:47.944965+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f8af9000/0x0/0x4ffc00000, data 0x31da602/0x3391000, compress 0x0/0x0/0x0, omap 0x39894, meta 0x3d3676c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:48.945103+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 19767296 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f8af9000/0x0/0x4ffc00000, data 0x31da602/0x3391000, compress 0x0/0x0/0x0, omap 0x39894, meta 0x3d3676c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:49.945223+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 19734528 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:50.945350+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2484239 data_alloc: 251658240 data_used: 27316141
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 19734528 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 ms_handle_reset con 0x559dd14a3400 session 0x559dd09de380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:51.945480+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 19734528 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d49c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 ms_handle_reset con 0x559dd2d5a800 session 0x559dd26d56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f8af9000/0x0/0x4ffc00000, data 0x31da674/0x3393000, compress 0x0/0x0/0x0, omap 0x3982b, meta 0x3d367d5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 312 ms_handle_reset con 0x559dcfffe400 session 0x559dd17d0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:52.945667+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 19726336 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 312 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2985500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 ms_handle_reset con 0x559dd14a3400 session 0x559dd26d48c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 ms_handle_reset con 0x559dd2d5d800 session 0x559dd147bc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 ms_handle_reset con 0x559dd2d49c00 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:53.945867+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156573696 unmapped: 19685376 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f8aeb000/0x0/0x4ffc00000, data 0x31de3c6/0x339d000, compress 0x0/0x0/0x0, omap 0x39e08, meta 0x3d361f8), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 ms_handle_reset con 0x559dd14a3400 session 0x559dd23b1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 314 ms_handle_reset con 0x559dd2d5a800 session 0x559dd23b1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:54.945975+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156606464 unmapped: 19652608 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 314 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd2fab880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3315c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 315 ms_handle_reset con 0x559dd3315c00 session 0x559dd17d1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:55.946080+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 315 ms_handle_reset con 0x559dd2d94c00 session 0x559dd22f7c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f8ae7000/0x0/0x4ffc00000, data 0x31e1afe/0x33a3000, compress 0x0/0x0/0x0, omap 0x3a613, meta 0x3d359ed), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2506383 data_alloc: 251658240 data_used: 27316157
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 19365888 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 315 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4c1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd14a3400 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd178e000 session 0x559dd23b1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:56.946192+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd2d5d800 session 0x559dd08a3180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd23b0000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2984a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158179328 unmapped: 18079744 heap: 176259072 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8ae2000/0x0/0x4ffc00000, data 0x31e37ef/0x33a6000, compress 0x0/0x0/0x0, omap 0x3aafd, meta 0x3d35503), peers [0,1] op hist [0,0,1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dd2d5d800 session 0x559dd22f6fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 ms_handle_reset con 0x559dcfffe400 session 0x559dd58608c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.654004097s of 10.057458878s, submitted: 108
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:57.946332+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 169213952 unmapped: 33857536 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f6042000/0x0/0x4ffc00000, data 0x4ae43df/0x4ca8000, compress 0x0/0x0/0x0, omap 0x3b00b, meta 0x4ed4ff5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd14a3400 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:58.946485+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dcfffe400 session 0x559dd5861880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd178e000 session 0x559dd17d0000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 169271296 unmapped: 33800192 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd2d5a800 session 0x559dd27841c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd2d5d800 session 0x559dd1543180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd2f5b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:59.946621+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 34013184 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd08a2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:00.946788+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 319 ms_handle_reset con 0x559dd2d94c00 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2700944 data_alloc: 251658240 data_used: 33264817
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 319 ms_handle_reset con 0x559dd383e000 session 0x559dd2f4c8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 319 ms_handle_reset con 0x559dd1740c00 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 34013184 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 319 ms_handle_reset con 0x559dd383e400 session 0x559dd3be3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 320 ms_handle_reset con 0x559dd0700000 session 0x559dd09df340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:01.946970+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 320 ms_handle_reset con 0x559dd1740c00 session 0x559dd1b148c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 320 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162832384 unmapped: 40239104 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:02.947119+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 321 ms_handle_reset con 0x559dd383e000 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19172 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 39198720 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 321 ms_handle_reset con 0x559dd383e400 session 0x559dd26d5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd2faa540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:03.947259+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f602e000/0x0/0x4ffc00000, data 0x4aed766/0x4cb8000, compress 0x0/0x0/0x0, omap 0x3cf17, meta 0x4ed30e9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 39198720 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd2d94c00 session 0x559dd27841c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd0700000 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd383e000 session 0x559dd2f4c540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd1740c00 session 0x559dd26d48c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dcfffe400 session 0x559dd26d56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd0700000 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd383e000 session 0x559dd2ec1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd2d94c00 session 0x559dd08a21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd2d94c00 session 0x559dd29848c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:04.947409+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd0700000 session 0x559dd2f5b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163889152 unmapped: 39182336 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd1740c00 session 0x559dd5860fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:05.947542+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2697994 data_alloc: 251658240 data_used: 33266795
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163889152 unmapped: 39182336 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:06.947771+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163889152 unmapped: 39182336 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf4c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dd3cf4c00 session 0x559dd147a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.727590561s of 10.040853500s, submitted: 176
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 323 ms_handle_reset con 0x559dd0700000 session 0x559dd2fdaa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:07.947960+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 323 ms_handle_reset con 0x559dd2d94c00 session 0x559dd23b0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164577280 unmapped: 38494208 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f602c000/0x0/0x4ffc00000, data 0x4aeef0f/0x4cbe000, compress 0x0/0x0/0x0, omap 0x3d33f, meta 0x4ed2cc1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 324 ms_handle_reset con 0x559dd1740c00 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:08.948110+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 324 ms_handle_reset con 0x559dd383e000 session 0x559dd1c2d340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 38486016 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 324 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be3a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 324 ms_handle_reset con 0x559dd1740c00 session 0x559dd29841c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dd0700000 session 0x559dd2f5a1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:09.948229+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dd2d94c00 session 0x559dd1b15180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165691392 unmapped: 37380096 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:10.948337+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2748632 data_alloc: 251658240 data_used: 40281169
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 167329792 unmapped: 35741696 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dd2d66800 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:11.948516+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dcfffe400 session 0x559dd147b880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 172630016 unmapped: 30441472 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dd0700000 session 0x559dd17d1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 ms_handle_reset con 0x559dd1740c00 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 326 ms_handle_reset con 0x559dd2d94c00 session 0x559dd2f4c8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:12.948654+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173400064 unmapped: 29671424 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6858c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 326 ms_handle_reset con 0x559dd6858c00 session 0x559dd58616c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:13.967318+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 327 ms_handle_reset con 0x559dcfffe400 session 0x559dd09dea80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 327 ms_handle_reset con 0x559dd1740c00 session 0x559dd2f5aa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f5a6b000/0x0/0x4ffc00000, data 0x50aa267/0x5281000, compress 0x0/0x0/0x0, omap 0x3f036, meta 0x4ed0fca), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 29409280 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 ms_handle_reset con 0x559dd0700000 session 0x559dd61c5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 ms_handle_reset con 0x559dd30e5c00 session 0x559dd29848c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:14.967435+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173662208 unmapped: 29409280 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d94c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 ms_handle_reset con 0x559dd2d94c00 session 0x559dd2fdaa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 ms_handle_reset con 0x559dcfffe400 session 0x559dd1c2d6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:15.967601+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2836970 data_alloc: 268435456 data_used: 45962438
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173686784 unmapped: 29384704 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f5a61000/0x0/0x4ffc00000, data 0x50ad9d7/0x5287000, compress 0x0/0x0/0x0, omap 0x3f82c, meta 0x4ed07d4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 329 ms_handle_reset con 0x559dd1740c00 session 0x559dd2985340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 329 ms_handle_reset con 0x559dd0700000 session 0x559dd22f6000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:16.967782+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 329 ms_handle_reset con 0x559dd30e5c00 session 0x559dd1b14380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173752320 unmapped: 29319168 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:17.968052+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173752320 unmapped: 29319168 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:18.968306+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487469673s of 11.157083511s, submitted: 164
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173752320 unmapped: 29319168 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6858800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 330 ms_handle_reset con 0x559dd6859c00 session 0x559dd17d0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 330 ms_handle_reset con 0x559dd6858800 session 0x559dd11dbc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:19.968451+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 330 ms_handle_reset con 0x559dcfffe400 session 0x559dd5860c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173768704 unmapped: 29302784 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 ms_handle_reset con 0x559dd0700000 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 ms_handle_reset con 0x559dd1740c00 session 0x559dd2f4ca80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 ms_handle_reset con 0x559dd6859c00 session 0x559dd2fda000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:20.968571+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f600d000/0x0/0x4ffc00000, data 0x4afcba8/0x4cd8000, compress 0x0/0x0/0x0, omap 0x40183, meta 0x4ecfe7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2823951 data_alloc: 268435456 data_used: 48816042
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175816704 unmapped: 27254784 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f600d000/0x0/0x4ffc00000, data 0x4afcba8/0x4cd8000, compress 0x0/0x0/0x0, omap 0x40183, meta 0x4ecfe7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 ms_handle_reset con 0x559dcfffe400 session 0x559dd22f6fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:21.968752+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f600d000/0x0/0x4ffc00000, data 0x4b01ba8/0x4cdd000, compress 0x0/0x0/0x0, omap 0x40183, meta 0x4ecfe7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176586752 unmapped: 26484736 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:22.968903+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176586752 unmapped: 26484736 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:23.969016+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 26279936 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 334 ms_handle_reset con 0x559dd0700000 session 0x559dd22f7a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:24.969174+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176947200 unmapped: 26124288 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 334 handle_osd_map epochs [334,335], i have 335, src has [1,335]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6858800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd6858800 session 0x559dd147b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd30e5c00 session 0x559dd2984540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd6859800 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd1740c00 session 0x559dd2fda700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd6859800 session 0x559dd2fabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:25.969302+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dcfffe400 session 0x559dd22f6380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd0700000 session 0x559dd09dfdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2879279 data_alloc: 268435456 data_used: 50327466
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd30e5c00 session 0x559dd2785180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f6003000/0x0/0x4ffc00000, data 0x4b089d9/0x4ce7000, compress 0x0/0x0/0x0, omap 0x40fd4, meta 0x4ecf02c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 25649152 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 ms_handle_reset con 0x559dd0700000 session 0x559dd3be2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dd6859800 session 0x559dd2fdb340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dd1740c00 session 0x559dd147b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6858800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dd6858800 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:26.969436+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178249728 unmapped: 24821760 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dcfffe400 session 0x559dd1c2da40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:27.969613+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dd1740c00 session 0x559dd22f7500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178405376 unmapped: 24666112 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 ms_handle_reset con 0x559dd6859800 session 0x559dd08a36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:28.969755+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178044928 unmapped: 25026560 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.347466469s of 10.660053253s, submitted: 136
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 handle_osd_map epochs [338,338], i have 338, src has [1,338]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 ms_handle_reset con 0x559dd0700000 session 0x559dd07d2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6858000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 ms_handle_reset con 0x559dd6858000 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:29.969880+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 182468608 unmapped: 20602880 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:30.970024+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2919101 data_alloc: 268435456 data_used: 54828201
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 182468608 unmapped: 20602880 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:31.970207+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f5a2d000/0x0/0x4ffc00000, data 0x50dc895/0x52bb000, compress 0x0/0x0/0x0, omap 0x41687, meta 0x4ece979), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 ms_handle_reset con 0x559dd6859000 session 0x559dd1b156c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 ms_handle_reset con 0x559dd6859400 session 0x559dd58608c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 182525952 unmapped: 20545536 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:32.970332+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 23478272 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:33.970447+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 23592960 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:34.970622+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f5f8e000/0x0/0x4ffc00000, data 0x4b7d388/0x4d5c000, compress 0x0/0x0/0x0, omap 0x41bac, meta 0x4ece454), peers [0,1] op hist [0,1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 23592960 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 340 ms_handle_reset con 0x559dd0700000 session 0x559dd07d3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:35.970808+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2870811 data_alloc: 268435456 data_used: 50327310
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 23592960 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:36.971176+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 340 ms_handle_reset con 0x559dd1740c00 session 0x559dd1543c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 23592960 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f5f80000/0x0/0x4ffc00000, data 0x4b87110/0x4d6a000, compress 0x0/0x0/0x0, omap 0x421e3, meta 0x4ecde1d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:37.971283+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 341 ms_handle_reset con 0x559dd6859000 session 0x559dd1542e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 341 ms_handle_reset con 0x559dcfffe400 session 0x559dd09dee00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179503104 unmapped: 23568384 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:38.971433+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 342 ms_handle_reset con 0x559dd0700000 session 0x559dd2785a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 342 ms_handle_reset con 0x559dd6859400 session 0x559dd2785180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 342 ms_handle_reset con 0x559dd6859800 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:39.971592+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f5f7b000/0x0/0x4ffc00000, data 0x4b88eee/0x4d6c000, compress 0x0/0x0/0x0, omap 0x424b8, meta 0x4ecdb48), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:40.971753+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.005645752s of 11.276721001s, submitted: 110
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 343 ms_handle_reset con 0x559dcfffe400 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 343 ms_handle_reset con 0x559dd0700000 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2877578 data_alloc: 268435456 data_used: 50327310
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 343 ms_handle_reset con 0x559dd6859000 session 0x559dd57f4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:41.971915+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f5f7c000/0x0/0x4ffc00000, data 0x4b8a793/0x4d6e000, compress 0x0/0x0/0x0, omap 0x42aa9, meta 0x4ecd557), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:42.972022+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 344 ms_handle_reset con 0x559dd6859400 session 0x559dd1c2cc40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:43.972131+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:44.972390+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:45.972499+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2879270 data_alloc: 268435456 data_used: 50328905
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f5f7b000/0x0/0x4ffc00000, data 0x4b8c3bb/0x4d71000, compress 0x0/0x0/0x0, omap 0x42c26, meta 0x4ecd3da), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:46.972630+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 344 handle_osd_map epochs [344,345], i have 345, src has [1,345]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:47.972747+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:48.972890+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a5000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 345 ms_handle_reset con 0x559dd14a5000 session 0x559dd2ec0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:49.973027+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:50.973163+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2885890 data_alloc: 268435456 data_used: 50337097
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 23543808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.343337059s of 10.450056076s, submitted: 62
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 346 ms_handle_reset con 0x559dd6859000 session 0x559dd1542380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f5f71000/0x0/0x4ffc00000, data 0x4b8fab8/0x4d79000, compress 0x0/0x0/0x0, omap 0x43221, meta 0x4eccddf), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:51.973308+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 346 ms_handle_reset con 0x559dd6859400 session 0x559dd27856c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 346 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 23093248 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3843c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 347 ms_handle_reset con 0x559dd14a4400 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd3843c00 session 0x559dd17d0fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd0700000 session 0x559dd3be3180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:52.973440+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 23035904 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4c1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:53.973573+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd383f000 session 0x559dd2f4c700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd2d67c00 session 0x559dd17d0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 23035904 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd14a4400 session 0x559dd5861880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dcfffe400 session 0x559dd1b141c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:54.973777+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 23003136 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:55.973915+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f5f72000/0x0/0x4ffc00000, data 0x4b1a2c7/0x4d05000, compress 0x0/0x0/0x0, omap 0x438d1, meta 0x4ecc72f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2894340 data_alloc: 268435456 data_used: 51193844
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 23003136 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 ms_handle_reset con 0x559dd2d67c00 session 0x559dd61c5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 349 ms_handle_reset con 0x559dd383f000 session 0x559dd147bc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:56.974062+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f5fe1000/0x0/0x4ffc00000, data 0x4b1bee1/0x4d09000, compress 0x0/0x0/0x0, omap 0x43b34, meta 0x4ecc4cc), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 349 ms_handle_reset con 0x559dd6859400 session 0x559dd2f5a1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2788400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 349 ms_handle_reset con 0x559dd2788400 session 0x559dd5861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180076544 unmapped: 22994944 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 350 ms_handle_reset con 0x559dd6859000 session 0x559dd29856c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 350 ms_handle_reset con 0x559dd0700000 session 0x559dd27a5340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:57.974239+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 180076544 unmapped: 22994944 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 352 ms_handle_reset con 0x559dd2d67c00 session 0x559dd22f68c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:58.974371+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 352 ms_handle_reset con 0x559dcfffe400 session 0x559dd27856c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 352 ms_handle_reset con 0x559dd383f000 session 0x559dd17d0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 171843584 unmapped: 31227904 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 352 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 ms_handle_reset con 0x559dd0700000 session 0x559dd2785500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:59.974559+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 171155456 unmapped: 31916032 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2ec0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:00.974689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 ms_handle_reset con 0x559dd383f000 session 0x559dd2ec1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2679067 data_alloc: 251658240 data_used: 34328918
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 171163648 unmapped: 31907840 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 ms_handle_reset con 0x559dd3312000 session 0x559dd3be36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 ms_handle_reset con 0x559dd3309000 session 0x559dd2f4c000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.173216820s of 10.470589638s, submitted: 177
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:01.974777+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 354 ms_handle_reset con 0x559dd383f000 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 354 ms_handle_reset con 0x559dd0700000 session 0x559dd5860c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 44605440 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f78d0000/0x0/0x4ffc00000, data 0x3223e29/0x3416000, compress 0x0/0x0/0x0, omap 0x44c29, meta 0x4ecb3d7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 354 ms_handle_reset con 0x559dd6859400 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 355 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2f4d6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:02.974910+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 355 ms_handle_reset con 0x559dd0700000 session 0x559dd2f4c540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3309000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 355 ms_handle_reset con 0x559dd2d67c00 session 0x559dd5860540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 355 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158482432 unmapped: 44589056 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 ms_handle_reset con 0x559dd3309000 session 0x559dd2f5afc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:03.975049+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 ms_handle_reset con 0x559dd6859000 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 ms_handle_reset con 0x559dd383f000 session 0x559dd2f4ddc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdb340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158482432 unmapped: 44589056 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:04.975196+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2f5ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158482432 unmapped: 44589056 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 357 ms_handle_reset con 0x559dd0700000 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:05.975321+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3309000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 357 ms_handle_reset con 0x559dd3309000 session 0x559dd22f7500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2556417 data_alloc: 234881024 data_used: 20404097
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 44515328 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:06.975502+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f8602000/0x0/0x4ffc00000, data 0x24f2ff8/0x26e8000, compress 0x0/0x0/0x0, omap 0x457c8, meta 0x4eca838), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 44515328 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 357 ms_handle_reset con 0x559dd0700000 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:07.975679+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 358 ms_handle_reset con 0x559dd383f000 session 0x559dd2f4da40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 358 ms_handle_reset con 0x559dcfffe400 session 0x559dd26d56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f8604000/0x0/0x4ffc00000, data 0x24f2ff8/0x26e8000, compress 0x0/0x0/0x0, omap 0x457c8, meta 0x4eca838), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 45064192 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6859400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f8604000/0x0/0x4ffc00000, data 0x24f2ff8/0x26e8000, compress 0x0/0x0/0x0, omap 0x457c8, meta 0x4eca838), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:08.975810+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 359 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2f4d340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 359 ms_handle_reset con 0x559dd6859400 session 0x559dd08a3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 45064192 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:09.976012+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 45064192 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f85fb000/0x0/0x4ffc00000, data 0x24f6993/0x26ed000, compress 0x0/0x0/0x0, omap 0x45e52, meta 0x4eca1ae), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:10.976214+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2563769 data_alloc: 234881024 data_used: 20405323
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 360 ms_handle_reset con 0x559dcfffe400 session 0x559dd57f4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158023680 unmapped: 45047808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 360 ms_handle_reset con 0x559dd0700000 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:11.976418+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158023680 unmapped: 45047808 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291426659s of 10.574503899s, submitted: 134
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2f4d6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:12.976565+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 ms_handle_reset con 0x559dd383f000 session 0x559dd2ec0e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 45015040 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:13.976697+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 45015040 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:14.976907+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 heartbeat osd_stat(store_statfs(0x4f85f8000/0x0/0x4ffc00000, data 0x24fa177/0x26f0000, compress 0x0/0x0/0x0, omap 0x4642f, meta 0x4ec9bd1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 45015040 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:15.977048+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5f800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 ms_handle_reset con 0x559dd2d5f800 session 0x559dd2f5ae00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2565869 data_alloc: 234881024 data_used: 20405209
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 45015040 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 heartbeat osd_stat(store_statfs(0x4f85f8000/0x0/0x4ffc00000, data 0x24fa177/0x26f0000, compress 0x0/0x0/0x0, omap 0x4642f, meta 0x4ec9bd1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:17.316329+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158064640 unmapped: 45006848 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 362 ms_handle_reset con 0x559dcfffe400 session 0x559dd5860000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 363 ms_handle_reset con 0x559dd0700000 session 0x559dd61c5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 363 ms_handle_reset con 0x559dd2d67c00 session 0x559dd11dbc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:18.316530+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 44965888 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:19.316737+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 44965888 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 363 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x24fde78/0x26f7000, compress 0x0/0x0/0x0, omap 0x47a1b, meta 0x4ec85e5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 363 ms_handle_reset con 0x559dd383f000 session 0x559dd1b14700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:20.316904+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 44965888 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 364 ms_handle_reset con 0x559dd2d4a000 session 0x559dd08a2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:21.317122+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2576538 data_alloc: 234881024 data_used: 20405209
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 44957696 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 365 ms_handle_reset con 0x559dd67c2000 session 0x559dd07d2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:22.317293+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 44957696 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 365 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2501612/0x26fe000, compress 0x0/0x0/0x0, omap 0x47f88, meta 0x4ec8078), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:23.317445+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 44957696 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 365 ms_handle_reset con 0x559dcfffe400 session 0x559dd17d0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.230230331s of 11.308828354s, submitted: 106
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 366 ms_handle_reset con 0x559dd2d67c00 session 0x559dd1c2da40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:24.317611+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158130176 unmapped: 44941312 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 ms_handle_reset con 0x559dd0700000 session 0x559dd1b148c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:25.317802+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f85e4000/0x0/0x4ffc00000, data 0x2504d4a/0x2704000, compress 0x0/0x0/0x0, omap 0x4861e, meta 0x4ec79e2), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 ms_handle_reset con 0x559dd383f000 session 0x559dd1b141c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:26.317929+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2588672 data_alloc: 234881024 data_used: 20405209
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f85e4000/0x0/0x4ffc00000, data 0x2504d4a/0x2704000, compress 0x0/0x0/0x0, omap 0x4861e, meta 0x4ec79e2), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:27.318071+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 ms_handle_reset con 0x559dd0700000 session 0x559dd27a5180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:28.318222+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 367 handle_osd_map epochs [367,368], i have 368, src has [1,368]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 368 ms_handle_reset con 0x559dcfffe400 session 0x559dd147bc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 368 ms_handle_reset con 0x559dd383f000 session 0x559dd2fabdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 368 ms_handle_reset con 0x559dd67c2000 session 0x559dd2984fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:29.318403+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 ms_handle_reset con 0x559dd331dc00 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 ms_handle_reset con 0x559dd2d67c00 session 0x559dd6742000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 ms_handle_reset con 0x559dcfffe400 session 0x559dd5860a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 ms_handle_reset con 0x559dd0700000 session 0x559dd26d5880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383f000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 ms_handle_reset con 0x559dd67c2000 session 0x559dd2f4c700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:30.318551+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 370 ms_handle_reset con 0x559dd383f000 session 0x559dd23b0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 45727744 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 370 ms_handle_reset con 0x559dcfffe400 session 0x559dd1b14fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:31.318679+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2597488 data_alloc: 234881024 data_used: 20406379
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 45727744 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:32.318859+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 45727744 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f85df000/0x0/0x4ffc00000, data 0x250a237/0x270d000, compress 0x0/0x0/0x0, omap 0x48d1b, meta 0x4ec72e5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 371 ms_handle_reset con 0x559dd0700000 session 0x559dd09dfdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 371 ms_handle_reset con 0x559dd2d67c00 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:33.319037+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 45727744 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.868107796s of 10.297609329s, submitted: 51
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:34.319242+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 45727744 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 ms_handle_reset con 0x559dd178e000 session 0x559dd26d4540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 ms_handle_reset con 0x559dd67c2000 session 0x559dd5860c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 ms_handle_reset con 0x559dcfffe400 session 0x559dd1b148c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:35.319358+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f85d8000/0x0/0x4ffc00000, data 0x250db42/0x2712000, compress 0x0/0x0/0x0, omap 0x49295, meta 0x4ec6d6b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157368320 unmapped: 45703168 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 372 handle_osd_map epochs [372,373], i have 373, src has [1,373]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 373 ms_handle_reset con 0x559dd0700000 session 0x559dd1543dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:36.319468+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2606230 data_alloc: 234881024 data_used: 20407946
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157368320 unmapped: 45703168 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 373 ms_handle_reset con 0x559dd2d67c00 session 0x559dd3be3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 ms_handle_reset con 0x559dd178ec00 session 0x559dd22f68c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 ms_handle_reset con 0x559dd178e000 session 0x559dd2f4cfc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 ms_handle_reset con 0x559dcfffe400 session 0x559dd07d2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:37.319650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 44613632 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 heartbeat osd_stat(store_statfs(0x4f85cf000/0x0/0x4ffc00000, data 0x2510e89/0x2717000, compress 0x0/0x0/0x0, omap 0x496ee, meta 0x4ec6912), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 374 handle_osd_map epochs [375,375], i have 375, src has [1,375]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:38.319851+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 44605440 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 375 handle_osd_map epochs [377,377], i have 375, src has [1,377]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 375 handle_osd_map epochs [376,377], i have 375, src has [1,377]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:39.319967+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 44580864 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:40.320122+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 44580864 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 378 ms_handle_reset con 0x559dd178ec00 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 378 ms_handle_reset con 0x559dd0700000 session 0x559dd2ec0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:41.320280+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623042 data_alloc: 234881024 data_used: 20413609
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 44564480 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:42.320438+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 44564480 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 379 handle_osd_map epochs [380,381], i have 379, src has [1,381]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f85c1000/0x0/0x4ffc00000, data 0x2519bf8/0x2727000, compress 0x0/0x0/0x0, omap 0x4a26c, meta 0x4ec5d94), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:43.320674+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331b400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 381 ms_handle_reset con 0x559dd331b400 session 0x559dd17d1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158515200 unmapped: 44556288 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 382 ms_handle_reset con 0x559dd2d67c00 session 0x559dd23b1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:44.320936+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158531584 unmapped: 44539904 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.352110863s of 10.664066315s, submitted: 212
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 383 ms_handle_reset con 0x559dcfffe400 session 0x559dd2faa8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:45.321058+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158539776 unmapped: 44531712 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 383 ms_handle_reset con 0x559dd0700000 session 0x559dd2ec08c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:46.321221+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2636302 data_alloc: 234881024 data_used: 20413609
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158179328 unmapped: 44892160 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 ms_handle_reset con 0x559dd178ec00 session 0x559dd08a21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:47.321369+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 44867584 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 ms_handle_reset con 0x559dd2d67c00 session 0x559dd1542e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd331b400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 ms_handle_reset con 0x559dd331b400 session 0x559dd2ec0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f85b5000/0x0/0x4ffc00000, data 0x252272d/0x2735000, compress 0x0/0x0/0x0, omap 0x4b438, meta 0x4ec4bc8), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f85b5000/0x0/0x4ffc00000, data 0x252272d/0x2735000, compress 0x0/0x0/0x0, omap 0x4b507, meta 0x4ec4af9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:48.321587+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 44834816 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 385 ms_handle_reset con 0x559dcfffe400 session 0x559dd17d0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:49.321776+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 44834816 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 386 ms_handle_reset con 0x559dd0700000 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:50.321953+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 44580864 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2680c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd4f7ec00 session 0x559dd61c5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd2d67c00 session 0x559dd61c56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd2680c00 session 0x559dd08a36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd2d5a800 session 0x559dd147b880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:51.322115+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2653560 data_alloc: 234881024 data_used: 20414807
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158490624 unmapped: 44580864 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd0700000 session 0x559dd61c5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d67c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd4f7ec00 session 0x559dd58601c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 ms_handle_reset con 0x559dd2d67c00 session 0x559dd2ec0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:52.322251+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 44564480 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 389 ms_handle_reset con 0x559dd0700000 session 0x559dd2fda540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:53.322363+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f8581000/0x0/0x4ffc00000, data 0x254f483/0x2769000, compress 0x0/0x0/0x0, omap 0x4c711, meta 0x4ec38ef), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 44548096 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 390 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b01c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:54.322534+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 44548096 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.235068321s of 10.441557884s, submitted: 116
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2fabdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:55.322674+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 44523520 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:56.322850+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd178ec00 session 0x559dd09de380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2665025 data_alloc: 234881024 data_used: 20418638
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 44523520 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd4f7ec00 session 0x559dd07d3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:57.323131+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dcfffe400 session 0x559dd2faa700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 44523520 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd178ec00 session 0x559dd2fdba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd0700000 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 ms_handle_reset con 0x559dd2d5a800 session 0x559dd5860a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f859f000/0x0/0x4ffc00000, data 0x252e8a6/0x274d000, compress 0x0/0x0/0x0, omap 0x51337, meta 0x4ebecc9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:58.323294+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158564352 unmapped: 44507136 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:59.323434+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 ms_handle_reset con 0x559dd2d5d800 session 0x559dd58608c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158580736 unmapped: 44490752 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f8597000/0x0/0x4ffc00000, data 0x2531ef9/0x2753000, compress 0x0/0x0/0x0, omap 0x516b3, meta 0x4ebe94d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 ms_handle_reset con 0x559dd0700000 session 0x559dd57f5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:00.323577+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158629888 unmapped: 44441600 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 ms_handle_reset con 0x559dd178ec00 session 0x559dd147b880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:01.323806+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670051 data_alloc: 234881024 data_used: 20413743
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158679040 unmapped: 44392448 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f8598000/0x0/0x4ffc00000, data 0x2531e86/0x2751000, compress 0x0/0x0/0x0, omap 0x516b3, meta 0x4ebe94d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2ec0fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:02.323955+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158679040 unmapped: 44392448 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:03.324087+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 44367872 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:04.324211+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd2789c00 session 0x559dd07d2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 44367872 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:05.324382+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.106964111s of 10.198370934s, submitted: 81
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dcfffe400 session 0x559dd147a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156901376 unmapped: 46170112 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd0700000 session 0x559dd61c5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:06.324500+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2679337 data_alloc: 234881024 data_used: 19435269
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156901376 unmapped: 46170112 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd178ec00 session 0x559dd3be2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f8593000/0x0/0x4ffc00000, data 0x25358c3/0x2759000, compress 0x0/0x0/0x0, omap 0x51e4e, meta 0x4ebe1b2), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd2d5a800 session 0x559dd2ec0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd0ac3000 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd2789c00 session 0x559dd07d2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:07.324604+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156901376 unmapped: 46170112 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f8593000/0x0/0x4ffc00000, data 0x25358c3/0x2759000, compress 0x0/0x0/0x0, omap 0x51e04, meta 0x4ebe1fc), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:08.324758+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156901376 unmapped: 46170112 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 ms_handle_reset con 0x559dd0700000 session 0x559dd61c5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:09.324902+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dd0ac3000 session 0x559dd6742e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd178ec00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dd178ec00 session 0x559dd5860380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 157966336 unmapped: 45105152 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fda000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dd0700000 session 0x559dd08a2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:10.325034+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dd2789c00 session 0x559dd22f7500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 ms_handle_reset con 0x559dd0ac3000 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 ms_handle_reset con 0x559dd2d5a800 session 0x559dd6742000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:11.325169+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2750503 data_alloc: 234881024 data_used: 19435869
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 ms_handle_reset con 0x559dd0700000 session 0x559dd58601c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f7aba000/0x0/0x4ffc00000, data 0x300903d/0x3230000, compress 0x0/0x0/0x0, omap 0x5261c, meta 0x4ebd9e4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:12.325395+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 44933120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 ms_handle_reset con 0x559dd0ac3000 session 0x559dd27a4e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 398 ms_handle_reset con 0x559dd2789c00 session 0x559dd2a13a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:13.325639+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 44924928 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2788400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:14.325842+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 44924928 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f7ab6000/0x0/0x4ffc00000, data 0x300abeb/0x3232000, compress 0x0/0x0/0x0, omap 0x52a3f, meta 0x4ebd5c1), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 398 ms_handle_reset con 0x559dd2788400 session 0x559dd5861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:15.326016+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 44924928 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 398 ms_handle_reset con 0x559dcfffe400 session 0x559dd22f6fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.256335258s of 10.737761497s, submitted: 76
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:16.326165+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2751435 data_alloc: 234881024 data_used: 19435854
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158146560 unmapped: 44924928 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 ms_handle_reset con 0x559dd0700000 session 0x559dd2ec0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:17.326324+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158179328 unmapped: 44892160 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:18.326480+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158179328 unmapped: 44892160 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:19.326607+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3317400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 ms_handle_reset con 0x559dd2789c00 session 0x559dd17d0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 ms_handle_reset con 0x559dd0ac3000 session 0x559dd2ec1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 ms_handle_reset con 0x559dd3317400 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 44867584 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f7ab6000/0x0/0x4ffc00000, data 0x300c7e8/0x3234000, compress 0x0/0x0/0x0, omap 0x52bc3, meta 0x4ebd43d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:20.326787+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 44867584 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:21.326921+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2756247 data_alloc: 234881024 data_used: 19435854
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 44867584 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:22.327045+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 44867584 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f7ab6000/0x0/0x4ffc00000, data 0x300c7e8/0x3234000, compress 0x0/0x0/0x0, omap 0x52f73, meta 0x4ebd08d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:23.327163+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 44859392 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:24.327290+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 400 ms_handle_reset con 0x559dd0700000 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 44859392 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:25.327426+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 400 ms_handle_reset con 0x559dd0ac3000 session 0x559dd2f5afc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 44859392 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 400 ms_handle_reset con 0x559dd2789c00 session 0x559dd1542380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:26.327593+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3315800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.261169434s of 10.318774223s, submitted: 48
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761673 data_alloc: 234881024 data_used: 19435854
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 158081024 unmapped: 44990464 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 ms_handle_reset con 0x559dd3313400 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:27.327697+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 161234944 unmapped: 41836544 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f7ab4000/0x0/0x4ffc00000, data 0x300e301/0x3238000, compress 0x0/0x0/0x0, omap 0x53465, meta 0x4ebcb9b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:28.327861+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 40427520 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 ms_handle_reset con 0x559dd3313400 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 ms_handle_reset con 0x559dd330a400 session 0x559dd2f4ca80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 ms_handle_reset con 0x559dd3315800 session 0x559dd17d0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:29.327972+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 ms_handle_reset con 0x559dd0700000 session 0x559dd147a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dcfffe400 session 0x559dd08a36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 40427520 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dcfffe400 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:30.328108+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dd0700000 session 0x559dd3be3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 40419328 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dd330a400 session 0x559dd2f4ddc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dd3313400 session 0x559dd23b1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3315800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dd3315800 session 0x559dd2a121c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:31.328280+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2832994 data_alloc: 251658240 data_used: 30773582
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162668544 unmapped: 40402944 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 ms_handle_reset con 0x559dcfffe400 session 0x559dd2faa1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:32.328439+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dd0700000 session 0x559dd2984a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 162701312 unmapped: 40370176 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:33.328604+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dd330a400 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 43687936 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dd3313400 session 0x559dd5860a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f7aaa000/0x0/0x4ffc00000, data 0x301361b/0x3240000, compress 0x0/0x0/0x0, omap 0x54632, meta 0x4ebb9ce), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:34.328796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155164672 unmapped: 47906816 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:35.328943+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155164672 unmapped: 47906816 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dd0ac3000 session 0x559dd2785500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:36.329092+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713754 data_alloc: 234881024 data_used: 19436467
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155189248 unmapped: 47882240 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.657566071s of 10.857511520s, submitted: 134
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x254362a/0x2771000, compress 0x0/0x0/0x0, omap 0x545a0, meta 0x4ebba60), peers [0,1] op hist [0,1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dcfffe400 session 0x559dd2ec0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:37.329219+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 ms_handle_reset con 0x559dd0700000 session 0x559dd57f4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 48013312 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 404 ms_handle_reset con 0x559dd3313400 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:38.329364+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155066368 unmapped: 48005120 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 404 ms_handle_reset con 0x559dd2789c00 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f8576000/0x0/0x4ffc00000, data 0x25450a9/0x2774000, compress 0x0/0x0/0x0, omap 0x54bce, meta 0x4ebb432), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330cc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 405 ms_handle_reset con 0x559dd2d5a400 session 0x559dd2a12000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:39.329562+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155090944 unmapped: 47980544 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 ms_handle_reset con 0x559dd330cc00 session 0x559dd5860000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 ms_handle_reset con 0x559dd330a400 session 0x559dd07d2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:40.329733+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155090944 unmapped: 47980544 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fabc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:41.331086+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 ms_handle_reset con 0x559dd0700000 session 0x559dd23b1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778402 data_alloc: 234881024 data_used: 19436482
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 47898624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 ms_handle_reset con 0x559dd3313400 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 407 ms_handle_reset con 0x559dcfffe400 session 0x559dd26d48c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:42.332309+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f7c5b000/0x0/0x4ffc00000, data 0x2e5c37d/0x308f000, compress 0x0/0x0/0x0, omap 0x55376, meta 0x4ebac8a), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 47898624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 407 ms_handle_reset con 0x559dd0700000 session 0x559dd08a3880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 408 ms_handle_reset con 0x559dd330a400 session 0x559dd1543180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 408 ms_handle_reset con 0x559dd2789c00 session 0x559dd57f5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:43.333387+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 47898624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330cc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:44.333943+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3cf5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 408 ms_handle_reset con 0x559dd3cf5c00 session 0x559dd2fab500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 47898624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:45.334119+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 ms_handle_reset con 0x559dd330cc00 session 0x559dd17d0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 ms_handle_reset con 0x559dcfffe400 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c53000/0x0/0x4ffc00000, data 0x2e6005f/0x3097000, compress 0x0/0x0/0x0, omap 0x5592f, meta 0x4eba6d1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155189248 unmapped: 47882240 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:46.334910+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 ms_handle_reset con 0x559dd0700000 session 0x559dd22f7a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2793337 data_alloc: 234881024 data_used: 19436482
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155189248 unmapped: 47882240 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 ms_handle_reset con 0x559dd330a400 session 0x559dd2f4ca80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3311400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 ms_handle_reset con 0x559dd3311400 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 410 ms_handle_reset con 0x559dd0700000 session 0x559dd147a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:47.335120+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 48922624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.643310547s of 10.817337990s, submitted: 69
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 411 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 411 ms_handle_reset con 0x559dd2789c00 session 0x559dd07d2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:48.335317+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 48922624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 411 heartbeat osd_stat(store_statfs(0x4f7c49000/0x0/0x4ffc00000, data 0x2e633af/0x309f000, compress 0x0/0x0/0x0, omap 0x55eec, meta 0x4eba114), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:49.335605+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330cc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154173440 unmapped: 48898048 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd330cc00 session 0x559dd2ec0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0ac3800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:50.335741+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd0ac3800 session 0x559dd1543dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd330a400 session 0x559dd07d2380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dcfffe400 session 0x559dd5861a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 48881664 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd0700000 session 0x559dd3be2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd2789c00 session 0x559dd27a4e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:51.336141+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2801546 data_alloc: 234881024 data_used: 19436482
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330cc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dd330cc00 session 0x559dd08a2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 48881664 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f5a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:52.336266+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 154550272 unmapped: 48521216 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 413 ms_handle_reset con 0x559dd330d800 session 0x559dd1542380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:53.336436+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 155910144 unmapped: 47161344 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 414 ms_handle_reset con 0x559dd2ef2000 session 0x559dd3be3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:54.336717+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156049408 unmapped: 47022080 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7c16000/0x0/0x4ffc00000, data 0x2e9272f/0x30d2000, compress 0x0/0x0/0x0, omap 0x56669, meta 0x4eb9997), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd2d4b000 session 0x559dd3be2c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd2d5b000 session 0x559dd2985180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd330a400 session 0x559dd27a5340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:55.336932+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd330a800 session 0x559dd5861880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156180480 unmapped: 46891008 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dcfffe400 session 0x559dd2984a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd2d4b000 session 0x559dd58608c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:56.337061+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f7c10000/0x0/0x4ffc00000, data 0x2e947f7/0x30d6000, compress 0x0/0x0/0x0, omap 0x567ea, meta 0x4eb9816), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2877689 data_alloc: 251658240 data_used: 28950440
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156180480 unmapped: 46891008 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:57.337209+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f7c10000/0x0/0x4ffc00000, data 0x2e947f7/0x30d6000, compress 0x0/0x0/0x0, omap 0x567ea, meta 0x4eb9816), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 ms_handle_reset con 0x559dd2d5b000 session 0x559dd2fdba40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156180480 unmapped: 46891008 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 416 ms_handle_reset con 0x559dcfffe400 session 0x559dd6742e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f7c10000/0x0/0x4ffc00000, data 0x2e947f7/0x30d6000, compress 0x0/0x0/0x0, omap 0x567ea, meta 0x4eb9816), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.856781960s of 10.029344559s, submitted: 81
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:58.337394+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 46874624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 417 ms_handle_reset con 0x559dd2d4b000 session 0x559dd07d3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:59.337524+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 46874624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:00.337688+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 46874624 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 417 ms_handle_reset con 0x559dd330a800 session 0x559dd08a3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:01.337836+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 417 handle_osd_map epochs [417,418], i have 418, src has [1,418]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f7c09000/0x0/0x4ffc00000, data 0x2e99b52/0x30e1000, compress 0x0/0x0/0x0, omap 0x572aa, meta 0x4eb8d56), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 418 ms_handle_reset con 0x559dd2ef2000 session 0x559dd2faac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2893273 data_alloc: 251658240 data_used: 28951610
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 46866432 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:02.337972+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 419 ms_handle_reset con 0x559dd330d800 session 0x559dd1b14fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 419 ms_handle_reset con 0x559dd330a400 session 0x559dd58601c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 156221440 unmapped: 46850048 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:03.338093+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163405824 unmapped: 39665664 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:04.338265+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f79b4000/0x0/0x4ffc00000, data 0x30e91eb/0x3334000, compress 0x0/0x0/0x0, omap 0x57626, meta 0x4eb89da), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4ca80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd2d4b000 session 0x559dd2f5ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163086336 unmapped: 39985152 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:05.338526+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163086336 unmapped: 39985152 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd2ef2000 session 0x559dd1543dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:06.338785+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2957657 data_alloc: 251658240 data_used: 29701845
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163094528 unmapped: 39976960 heap: 203071488 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd330a800 session 0x559dd5861a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:07.338949+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd330a800 session 0x559dd07d2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163348480 unmapped: 48128000 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd330a400 session 0x559dd2fda380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dcfffe400 session 0x559dd2fdb880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd2ef2000 session 0x559dd2985500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330d800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd330d800 session 0x559dd2faa380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 ms_handle_reset con 0x559dd0861c00 session 0x559dd23b1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:08.339243+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.037155151s of 10.692111969s, submitted: 248
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 163520512 unmapped: 47955968 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 422 ms_handle_reset con 0x559dcfffe400 session 0x559dd3be2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 422 ms_handle_reset con 0x559dd2ef2000 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 422 ms_handle_reset con 0x559dd2d4b000 session 0x559dd2faa8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f4f75000/0x0/0x4ffc00000, data 0x5b2bdcb/0x5d77000, compress 0x0/0x0/0x0, omap 0x577d4, meta 0x4eb882c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:09.339458+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164593664 unmapped: 46882816 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 423 ms_handle_reset con 0x559dd330a400 session 0x559dd2f4c000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 423 ms_handle_reset con 0x559dd330a400 session 0x559dd2984a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:10.339691+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164585472 unmapped: 46891008 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 424 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4c1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:11.339902+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3163357 data_alloc: 251658240 data_used: 29701763
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 46858240 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:12.340070+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 164618240 unmapped: 46858240 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 425 ms_handle_reset con 0x559dd0861c00 session 0x559dd2ec0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 426 ms_handle_reset con 0x559dd2d4b000 session 0x559dd2a13c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:13.340329+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165675008 unmapped: 45801472 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2ef2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:14.340570+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f4f64000/0x0/0x4ffc00000, data 0x5b34228/0x5d82000, compress 0x0/0x0/0x0, omap 0x58aac, meta 0x4eb7554), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165675008 unmapped: 45801472 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f4f6a000/0x0/0x4ffc00000, data 0x5b34228/0x5d82000, compress 0x0/0x0/0x0, omap 0x58e23, meta 0x4eb71dd), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 427 ms_handle_reset con 0x559dd2ef2000 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:15.340725+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 427 ms_handle_reset con 0x559dcfffe400 session 0x559dd6742000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165699584 unmapped: 45776896 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:16.340933+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3169853 data_alloc: 251658240 data_used: 29701763
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165699584 unmapped: 45776896 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 427 ms_handle_reset con 0x559dd2d4b000 session 0x559dd57f4e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:17.341091+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 165699584 unmapped: 45776896 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f4f65000/0x0/0x4ffc00000, data 0x5b35e44/0x5d85000, compress 0x0/0x0/0x0, omap 0x58f9b, meta 0x4eb7065), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 428 ms_handle_reset con 0x559dd330a400 session 0x559dd22f6fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 429 ms_handle_reset con 0x559dd0861c00 session 0x559dd1542e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 429 ms_handle_reset con 0x559dd330a800 session 0x559dd1542700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 429 ms_handle_reset con 0x559dcfffe400 session 0x559dd2f4c380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:18.341575+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4b000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.478651047s of 10.032291412s, submitted: 187
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 166764544 unmapped: 44711936 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:19.343321+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 166764544 unmapped: 44711936 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:20.343535+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 166764544 unmapped: 44711936 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:21.343886+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3176273 data_alloc: 251658240 data_used: 29959465
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 166764544 unmapped: 44711936 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:22.344025+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:23.344166+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x5b3b178/0x5d8d000, compress 0x0/0x0/0x0, omap 0x59857, meta 0x4eb67a9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:24.344314+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x5b3b178/0x5d8d000, compress 0x0/0x0/0x0, omap 0x59857, meta 0x4eb67a9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:25.344466+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x5b3b178/0x5d8d000, compress 0x0/0x0/0x0, omap 0x59857, meta 0x4eb67a9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:26.344589+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3226071 data_alloc: 251658240 data_used: 38052137
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:27.344737+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:28.344890+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 37797888 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.150025368s of 10.155489922s, submitted: 12
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 ms_handle_reset con 0x559dd330a400 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:29.345011+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4f5d000/0x0/0x4ffc00000, data 0x5b3b178/0x5d8d000, compress 0x0/0x0/0x0, omap 0x59857, meta 0x4eb67a9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173686784 unmapped: 37789696 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:30.345200+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 173686784 unmapped: 37789696 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:31.345387+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3236020 data_alloc: 251658240 data_used: 38072617
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 174030848 unmapped: 37445632 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:32.345498+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 19685376 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:33.345650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 17924096 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:34.345773+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f2b84000/0x0/0x4ffc00000, data 0x6bc294a/0x6e18000, compress 0x0/0x0/0x0, omap 0x5a1e5, meta 0x6055e1b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 17924096 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f2b84000/0x0/0x4ffc00000, data 0x6bc294a/0x6e18000, compress 0x0/0x0/0x0, omap 0x5a1e5, meta 0x6055e1b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:35.346012+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 ms_handle_reset con 0x559dd1741c00 session 0x559dd2f5a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 17899520 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:36.346186+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352036 data_alloc: 251658240 data_used: 39847721
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193585152 unmapped: 17891328 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 432 handle_osd_map epochs [432,433], i have 433, src has [1,433]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 ms_handle_reset con 0x559dd330a000 session 0x559dd61c5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:37.346299+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189980672 unmapped: 21495808 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0701c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 ms_handle_reset con 0x559dd0861000 session 0x559dd2ec0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:38.346429+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 ms_handle_reset con 0x559dd0701c00 session 0x559dd2fab500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 ms_handle_reset con 0x559dd4f7f400 session 0x559dd2fab340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189980672 unmapped: 21495808 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 ms_handle_reset con 0x559dcfffe400 session 0x559dd57f4fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:39.346573+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.138085365s of 10.536491394s, submitted: 169
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dd1741c00 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dd330a000 session 0x559dd17d1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189980672 unmapped: 21495808 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:40.346804+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f2d2a000/0x0/0x4ffc00000, data 0x6bc6148/0x6e20000, compress 0x0/0x0/0x0, omap 0x5a7cf, meta 0x6055831), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190013440 unmapped: 21463040 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:41.346940+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3346286 data_alloc: 251658240 data_used: 39851915
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190013440 unmapped: 21463040 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:42.347069+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dd0861c00 session 0x559dd22f7880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dd2d4b000 session 0x559dd2984540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dcfffe400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dcfffe400 session 0x559dd5860380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190013440 unmapped: 21463040 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0701c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 ms_handle_reset con 0x559dd0701c00 session 0x559dd09de380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:43.347263+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f2d2d000/0x0/0x4ffc00000, data 0x6bc6138/0x6e1f000, compress 0x0/0x0/0x0, omap 0x5a7cf, meta 0x6055831), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 435 ms_handle_reset con 0x559dd1741c00 session 0x559dd5861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 435 ms_handle_reset con 0x559dd1741c00 session 0x559dd2faa700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189743104 unmapped: 21733376 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:44.347380+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 ms_handle_reset con 0x559dd4f7f400 session 0x559dd1b14a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f2d28000/0x0/0x4ffc00000, data 0x6bc7cf0/0x6e22000, compress 0x0/0x0/0x0, omap 0x5a947, meta 0x60556b9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 ms_handle_reset con 0x559dd330a400 session 0x559dd2fdb180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 21561344 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:45.347541+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f2cf8000/0x0/0x4ffc00000, data 0x6bf38f0/0x6e50000, compress 0x0/0x0/0x0, omap 0x5adbd, meta 0x6055243), peers [0,1] op hist [1])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190078976 unmapped: 21397504 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:46.347650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 ms_handle_reset con 0x559dd689e800 session 0x559dd23b08c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3359442 data_alloc: 251658240 data_used: 39932299
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190119936 unmapped: 21356544 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:47.347801+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5ac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190119936 unmapped: 21356544 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 ms_handle_reset con 0x559dd2d5ac00 session 0x559dd57f4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:48.347983+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190119936 unmapped: 21356544 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f2cf7000/0x0/0x4ffc00000, data 0x6bf3900/0x6e51000, compress 0x0/0x0/0x0, omap 0x5adbd, meta 0x6055243), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 436 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 437 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 437 ms_handle_reset con 0x559dd1741c00 session 0x559dd2f4d6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:49.348107+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190119936 unmapped: 21356544 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.486777306s of 10.550638199s, submitted: 45
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 ms_handle_reset con 0x559dd330a400 session 0x559dd17d16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:50.348229+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 ms_handle_reset con 0x559dd4f7f400 session 0x559dd09dee00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 ms_handle_reset con 0x559dd689e800 session 0x559dd2faa8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190128128 unmapped: 21348352 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:51.348401+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3363794 data_alloc: 251658240 data_used: 39932299
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190144512 unmapped: 21331968 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 ms_handle_reset con 0x559dd6aaa800 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:52.348600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f2cf1000/0x0/0x4ffc00000, data 0x6bf70df/0x6e57000, compress 0x0/0x0/0x0, omap 0x5b0ea, meta 0x6054f16), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190152704 unmapped: 21323776 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 438 handle_osd_map epochs [438,439], i have 439, src has [1,439]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:53.348735+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f2cf0000/0x0/0x4ffc00000, data 0x6bf8b7a/0x6e5a000, compress 0x0/0x0/0x0, omap 0x5b5da, meta 0x6054a26), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 21291008 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 439 ms_handle_reset con 0x559dd1741c00 session 0x559dd2985500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:54.348883+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 440 ms_handle_reset con 0x559dd330a400 session 0x559dd2fda380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190193664 unmapped: 21282816 heap: 211476480 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 ms_handle_reset con 0x559dd4f7f400 session 0x559dd1b14fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:55.349037+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 ms_handle_reset con 0x559dd689e800 session 0x559dd07d3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaa800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193527808 unmapped: 21143552 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 ms_handle_reset con 0x559dd6aaa800 session 0x559dd3be3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:56.349180+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3415371 data_alloc: 251658240 data_used: 41279984
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193888256 unmapped: 20783104 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:57.349333+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 ms_handle_reset con 0x559dd1741c00 session 0x559dd23b1340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 20733952 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:58.349585+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f29dd000/0x0/0x4ffc00000, data 0x6f07375/0x716b000, compress 0x0/0x0/0x0, omap 0x5b8ce, meta 0x6054732), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 23904256 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 443 ms_handle_reset con 0x559dd330a400 session 0x559dd27848c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:59.349774+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f29dc000/0x0/0x4ffc00000, data 0x6f08e10/0x716e000, compress 0x0/0x0/0x0, omap 0x5bdc2, meta 0x605423e), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 443 ms_handle_reset con 0x559dd4f7f400 session 0x559dd1b15180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 23904256 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.979341507s of 10.077736855s, submitted: 70
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 ms_handle_reset con 0x559dd689e800 session 0x559dd2ec0000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:00.349903+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189587456 unmapped: 25083904 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:01.350009+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a2c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 ms_handle_reset con 0x559dd14a2c00 session 0x559dd2fdb500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f29d7000/0x0/0x4ffc00000, data 0x6f0c556/0x7173000, compress 0x0/0x0/0x0, omap 0x5c0b8, meta 0x6053f48), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3417842 data_alloc: 251658240 data_used: 41296953
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189595648 unmapped: 25075712 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:02.350144+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 ms_handle_reset con 0x559dd1741c00 session 0x559dd147a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330a400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f29d7000/0x0/0x4ffc00000, data 0x6f0c556/0x7173000, compress 0x0/0x0/0x0, omap 0x5c0b8, meta 0x6053f48), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189595648 unmapped: 25075712 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:03.350276+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189595648 unmapped: 25075712 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 444 handle_osd_map epochs [445,446], i have 444, src has [1,446]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:04.350408+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189603840 unmapped: 25067520 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:05.350527+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 446 ms_handle_reset con 0x559dd689e800 session 0x559dd23b0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f29d1000/0x0/0x4ffc00000, data 0x6f0fba9/0x7179000, compress 0x0/0x0/0x0, omap 0x5c5b0, meta 0x6053a50), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189603840 unmapped: 25067520 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:06.350648+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3446088 data_alloc: 251658240 data_used: 41259650
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f29b4000/0x0/0x4ffc00000, data 0x6f58799/0x7196000, compress 0x0/0x0/0x0, omap 0x5c955, meta 0x60536ab), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 24870912 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:07.350763+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 24870912 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 447 ms_handle_reset con 0x559dd06f2000 session 0x559dd08a2a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:08.350903+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f29b4000/0x0/0x4ffc00000, data 0x6f58799/0x7196000, compress 0x0/0x0/0x0, omap 0x5c955, meta 0x60536ab), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 24870912 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:09.351037+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189825024 unmapped: 24846336 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 449 ms_handle_reset con 0x559dd30e4400 session 0x559dd07d36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:10.351190+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2680400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.304457664s of 10.407533646s, submitted: 107
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 449 ms_handle_reset con 0x559dd2680400 session 0x559dd23b0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f29ad000/0x0/0x4ffc00000, data 0x6f5bdfc/0x719d000, compress 0x0/0x0/0x0, omap 0x5d640, meta 0x60529c0), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 189849600 unmapped: 24821760 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 450 ms_handle_reset con 0x559dd06f2000 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:11.351340+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 450 ms_handle_reset con 0x559dd1741c00 session 0x559dd2ec0c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2680400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3455723 data_alloc: 251658240 data_used: 41255554
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190005248 unmapped: 24666112 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 450 ms_handle_reset con 0x559dd2680400 session 0x559dd17d0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:12.351545+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190005248 unmapped: 24666112 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:13.351733+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 451 ms_handle_reset con 0x559dd30e4400 session 0x559dd27a4e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190013440 unmapped: 24657920 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:14.351880+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd689e800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190013440 unmapped: 24657920 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:15.352036+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 452 ms_handle_reset con 0x559dd689e800 session 0x559dd08a2700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f29a8000/0x0/0x4ffc00000, data 0x6f5f477/0x71a2000, compress 0x0/0x0/0x0, omap 0x5dc5f, meta 0x60523a1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190070784 unmapped: 24600576 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 452 ms_handle_reset con 0x559dd06f2000 session 0x559dd22f6380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:16.352160+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 453 ms_handle_reset con 0x559dd1741c00 session 0x559dd6742a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3465540 data_alloc: 251658240 data_used: 41259650
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190103552 unmapped: 24567808 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2680400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 453 ms_handle_reset con 0x559dd2680400 session 0x559dd2fabdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:17.352545+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 453 ms_handle_reset con 0x559dd30e4400 session 0x559dd2784700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190152704 unmapped: 24518656 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:18.352989+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 454 ms_handle_reset con 0x559dd3310c00 session 0x559dd147b880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190423040 unmapped: 24248320 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f299e000/0x0/0x4ffc00000, data 0x6f6471c/0x71ac000, compress 0x0/0x0/0x0, omap 0x5e459, meta 0x6051ba7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:19.353135+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190431232 unmapped: 24240128 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:20.353351+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 454 handle_osd_map epochs [454,455], i have 455, src has [1,455]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 ms_handle_reset con 0x559dd06f2000 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 ms_handle_reset con 0x559dd1740800 session 0x559dd2f5bdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 ms_handle_reset con 0x559dd67c3400 session 0x559dd61c5340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.542074203s of 10.304674149s, submitted: 127
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 ms_handle_reset con 0x559dd1741c00 session 0x559dd15421c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190455808 unmapped: 24215552 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:21.353501+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2680400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 ms_handle_reset con 0x559dd2680400 session 0x559dd2fda380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3469440 data_alloc: 251658240 data_used: 42052324
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190480384 unmapped: 24190976 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:22.353689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 456 ms_handle_reset con 0x559dd06f2000 session 0x559dd2fdac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 456 ms_handle_reset con 0x559dd1740800 session 0x559dd61c5880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 456 ms_handle_reset con 0x559dd1741c00 session 0x559dd2a121c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190521344 unmapped: 24150016 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:23.353855+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 457 ms_handle_reset con 0x559dd67c3400 session 0x559dd5860a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e4400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 457 heartbeat osd_stat(store_statfs(0x4f2cc9000/0x0/0x4ffc00000, data 0x6c348ed/0x6e7e000, compress 0x0/0x0/0x0, omap 0x5ee82, meta 0x605117e), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 457 ms_handle_reset con 0x559dd30e4400 session 0x559dd2ec0fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 457 ms_handle_reset con 0x559dd3310c00 session 0x559dd2f5a1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 24125440 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:24.354079+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190570496 unmapped: 24100864 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:25.354239+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 ms_handle_reset con 0x559dd0700000 session 0x559dd147b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 ms_handle_reset con 0x559dd2789c00 session 0x559dd17d1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 ms_handle_reset con 0x559dd06f2000 session 0x559dd23b01c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190570496 unmapped: 24100864 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:26.354392+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f2d10000/0x0/0x4ffc00000, data 0x6bc5419/0x6e3a000, compress 0x0/0x0/0x0, omap 0x5f0fd, meta 0x6050f03), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3446256 data_alloc: 251658240 data_used: 41923591
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 190570496 unmapped: 24100864 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:27.354618+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1740800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 ms_handle_reset con 0x559dd1740800 session 0x559dd2ec1180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 29999104 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:28.354805+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 29999104 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:29.354967+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f3e6c000/0x0/0x4ffc00000, data 0x5a67eb4/0x5cde000, compress 0x0/0x0/0x0, omap 0x5f2ea, meta 0x6050d16), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184672256 unmapped: 29999104 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:30.355139+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.773141861s of 10.000968933s, submitted: 103
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 30916608 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:31.355323+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 459 handle_osd_map epochs [459,460], i have 460, src has [1,460]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f3e69000/0x0/0x4ffc00000, data 0x5a69ac0/0x5ce1000, compress 0x0/0x0/0x0, omap 0x5f871, meta 0x605078f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3275082 data_alloc: 251658240 data_used: 30577671
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184803328 unmapped: 29868032 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:32.355439+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 460 ms_handle_reset con 0x559dd06f2000 session 0x559dd2785880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:33.355586+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f3e69000/0x0/0x4ffc00000, data 0x5a69ac0/0x5ce1000, compress 0x0/0x0/0x0, omap 0x5f9d8, meta 0x6050628), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:34.355766+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:35.355949+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:36.356095+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3277516 data_alloc: 251658240 data_used: 30573575
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:37.356303+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:38.356458+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd2fdbc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 184827904 unmapped: 29843456 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:39.356627+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3e64000/0x0/0x4ffc00000, data 0x5a6b55b/0x5ce4000, compress 0x0/0x0/0x0, omap 0x5fbbf, meta 0x6050441), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd1542fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd3310c00 session 0x559dd07d3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185450496 unmapped: 29220864 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:40.356783+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185450496 unmapped: 29220864 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:41.356911+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd330a400 session 0x559dd2ec0000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.175397873s of 10.737386703s, submitted: 76
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd4f7f400 session 0x559dd2985180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd2a136c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3287242 data_alloc: 251658240 data_used: 33424391
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185671680 unmapped: 28999680 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:42.357039+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd6742fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185671680 unmapped: 28999680 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:43.357190+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3e65000/0x0/0x4ffc00000, data 0x5a6cfda/0x5ce7000, compress 0x0/0x0/0x0, omap 0x5fe39, meta 0x60501c7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd17d1880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 186728448 unmapped: 27942912 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd3310c00 session 0x559dd2fdb340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:44.357306+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd147b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185679872 unmapped: 28991488 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:45.357440+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd2ec1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3847000/0x0/0x4ffc00000, data 0x608afda/0x6305000, compress 0x0/0x0/0x0, omap 0x6004c, meta 0x604ffb4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185679872 unmapped: 28991488 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:46.357648+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3847000/0x0/0x4ffc00000, data 0x608afda/0x6305000, compress 0x0/0x0/0x0, omap 0x6004c, meta 0x604ffb4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3323595 data_alloc: 251658240 data_used: 33424391
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 185679872 unmapped: 28991488 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd2fda380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:47.357755+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd5860c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:48.357962+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:49.358166+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:50.358318+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:51.358474+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f650c000/0x0/0x4ffc00000, data 0x2bc6f68/0x2e3f000, compress 0x0/0x0/0x0, omap 0x6004c, meta 0x604ffb4), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977467 data_alloc: 234881024 data_used: 20494759
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:52.358666+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:53.358819+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3310c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd3310c00 session 0x559dd2ec0e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:54.358941+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd23b16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 176742400 unmapped: 37928960 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:55.359102+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.746451378s of 13.885400772s, submitted: 68
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd07d3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd2a121c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd2f4c000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd4f7f400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd4f7f400 session 0x559dd2f4c700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:56.359249+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f6d0c000/0x0/0x4ffc00000, data 0x2bc6f8b/0x2e40000, compress 0x0/0x0/0x0, omap 0x6025f, meta 0x604fda1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2980717 data_alloc: 234881024 data_used: 19481925
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:57.359359+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:58.359522+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:59.359741+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:00.359874+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f6d0c000/0x0/0x4ffc00000, data 0x2bc6f8b/0x2e40000, compress 0x0/0x0/0x0, omap 0x6025f, meta 0x604fda1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:01.360022+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017709 data_alloc: 234881024 data_used: 25773381
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:02.360159+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:03.360285+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:04.360408+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:05.360530+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:06.360662+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f6d0c000/0x0/0x4ffc00000, data 0x2bc6f8b/0x2e40000, compress 0x0/0x0/0x0, omap 0x6025f, meta 0x604fda1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017709 data_alloc: 234881024 data_used: 25773381
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 38871040 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:07.360782+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.942583084s of 11.967862129s, submitted: 14
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178544640 unmapped: 36126720 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:08.360977+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:09.361092+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:10.361220+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:11.361423+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f654b000/0x0/0x4ffc00000, data 0x337ff8b/0x35f9000, compress 0x0/0x0/0x0, omap 0x6025f, meta 0x604fda1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073115 data_alloc: 234881024 data_used: 25901381
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:12.361608+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:13.361825+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178814976 unmapped: 35856384 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:14.361976+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178143232 unmapped: 36528128 heap: 214671360 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:15.362295+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 57147392 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:16.362502+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd08a3dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd2784540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321441 data_alloc: 234881024 data_used: 25901381
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 57106432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:17.362762+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3707000/0x0/0x4ffc00000, data 0x61cafed/0x6445000, compress 0x0/0x0/0x0, omap 0x60375, meta 0x604fc8b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.278882027s of 10.300660133s, submitted: 122
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd7893c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd67c3400 session 0x559dd1b15dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 57073664 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:18.362980+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:19.363168+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 57073664 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:20.363320+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178585600 unmapped: 57073664 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd23b0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd57f56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd17d0a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd67c3400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd67c3400 session 0x559dd2984700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd06f2000 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0700000 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd2faafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd5861c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:21.363481+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346491 data_alloc: 234881024 data_used: 25901346
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:22.363647+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:23.363786+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f3366000/0x0/0x4ffc00000, data 0x656bfda/0x67e6000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:24.363924+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:25.364091+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:26.364303+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178610176 unmapped: 57049088 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0861800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0861800 session 0x559dd27841c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3350938 data_alloc: 234881024 data_used: 25901346
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:27.364432+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 56688640 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd06f2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0700000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:28.364664+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 56688640 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:29.364853+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 56680448 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f333b000/0x0/0x4ffc00000, data 0x6595ffd/0x6811000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:30.365075+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 181485568 unmapped: 54173696 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:31.365252+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 181485568 unmapped: 54173696 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.689688683s of 13.800403595s, submitted: 35
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd07d36c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0701800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3375303 data_alloc: 251658240 data_used: 29609858
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:32.365403+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 54165504 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:33.365558+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 54165504 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f333a000/0x0/0x4ffc00000, data 0x6596020/0x6812000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:34.365722+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 54165504 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:35.365838+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 186441728 unmapped: 49217536 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:36.365965+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 41345024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2795000 session 0x559dd1b14fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd6aaac00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:37.366351+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3466439 data_alloc: 268435456 data_used: 45213058
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194445312 unmapped: 41213952 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:38.366536+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 41017344 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f333a000/0x0/0x4ffc00000, data 0x6596020/0x6812000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:39.366668+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194674688 unmapped: 40984576 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f328d000/0x0/0x4ffc00000, data 0x6643020/0x68bf000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:40.366780+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 197861376 unmapped: 37797888 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f2fa0000/0x0/0x4ffc00000, data 0x6928020/0x6ba4000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:41.366914+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 36552704 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:42.367054+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3505137 data_alloc: 268435456 data_used: 45360514
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199122944 unmapped: 36536320 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:43.367188+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199122944 unmapped: 36536320 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f2f7b000/0x0/0x4ffc00000, data 0x694d020/0x6bc9000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:44.367351+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199155712 unmapped: 36503552 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:45.367472+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199155712 unmapped: 36503552 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.040661812s of 14.241411209s, submitted: 49
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:46.367610+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213024768 unmapped: 22634496 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:47.367741+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3571169 data_alloc: 268435456 data_used: 47584642
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 24977408 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:48.367924+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 210690048 unmapped: 24969216 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f2724000/0x0/0x4ffc00000, data 0x71ac020/0x7428000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:49.368081+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 24567808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:50.368234+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 24567808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2d4dc00 session 0x559dd2a13c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:51.368391+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 24567808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f2724000/0x0/0x4ffc00000, data 0x71ac020/0x7428000, compress 0x0/0x0/0x0, omap 0x60588, meta 0x604fa78), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:52.368616+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580369 data_alloc: 268435456 data_used: 48833922
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 24567808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:53.368776+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 24567808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:54.368969+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211140608 unmapped: 24518656 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:55.369145+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd2789c00 session 0x559dd26d56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd0701800 session 0x559dd2784540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211140608 unmapped: 24518656 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.651609421s of 10.000156403s, submitted: 196
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2789c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 ms_handle_reset con 0x559dd1741c00 session 0x559dd2fda540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:56.369353+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211238912 unmapped: 24420352 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:57.369506+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3583617 data_alloc: 268435456 data_used: 48867714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211238912 unmapped: 24420352 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f2717000/0x0/0x4ffc00000, data 0x71b6b99/0x7433000, compress 0x0/0x0/0x0, omap 0x60a44, meta 0x604f5bc), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:58.369695+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211304448 unmapped: 24354816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:59.369874+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211312640 unmapped: 24346624 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 464 heartbeat osd_stat(store_statfs(0x4f270f000/0x0/0x4ffc00000, data 0x71bf797/0x7439000, compress 0x0/0x0/0x0, omap 0x60bb0, meta 0x604f450), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:00.370013+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211443712 unmapped: 24215552 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 464 handle_osd_map epochs [464,465], i have 465, src has [1,465]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e2395/0x7457000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:01.370164+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212541440 unmapped: 23117824 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e2395/0x7457000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:02.370388+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3601971 data_alloc: 268435456 data_used: 48872360
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212541440 unmapped: 23117824 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e2395/0x7457000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:03.370595+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212541440 unmapped: 23117824 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e2395/0x7457000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:04.370776+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212574208 unmapped: 23085056 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:05.370903+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212574208 unmapped: 23085056 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.131147385s of 10.164322853s, submitted: 18
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:06.371039+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212623360 unmapped: 23035904 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f1000/0x0/0x4ffc00000, data 0x71e4395/0x7459000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:07.371229+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3600835 data_alloc: 268435456 data_used: 48872360
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 23003136 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e4395/0x7459000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:08.371423+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 23003136 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:09.371671+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 23003136 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 ms_handle_reset con 0x559dd2795000 session 0x559dd2784a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:10.371777+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 23003136 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 ms_handle_reset con 0x559dd2d4dc00 session 0x559dd1543880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:11.371902+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 23003136 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 ms_handle_reset con 0x559dd2d5d400 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd30e5800
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f26f3000/0x0/0x4ffc00000, data 0x71e4395/0x7459000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 ms_handle_reset con 0x559dd30e5800 session 0x559dd22f7880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd1741c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:12.372031+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3605737 data_alloc: 268435456 data_used: 48872360
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211468288 unmapped: 24190976 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:13.372154+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211468288 unmapped: 24190976 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:14.372278+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 23814144 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 465 handle_osd_map epochs [465,466], i have 466, src has [1,466]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26cb000/0x0/0x4ffc00000, data 0x72093a5/0x747f000, compress 0x0/0x0/0x0, omap 0x60f46, meta 0x604f0ba), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:15.372547+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 23756800 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:16.372737+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 23756800 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26c6000/0x0/0x4ffc00000, data 0x720af41/0x7482000, compress 0x0/0x0/0x0, omap 0x61406, meta 0x604ebfa), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:17.372862+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616527 data_alloc: 268435456 data_used: 50324904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 23756800 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:18.373135+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 23756800 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.042698860s of 13.061352730s, submitted: 9
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26c6000/0x0/0x4ffc00000, data 0x720af41/0x7482000, compress 0x0/0x0/0x0, omap 0x61406, meta 0x604ebfa), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:19.373296+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:20.373449+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:21.373634+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:22.373806+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3647457 data_alloc: 268435456 data_used: 50320808
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:23.373932+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd2d4dc00 session 0x559dd61c5dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:24.374081+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26c5000/0x0/0x4ffc00000, data 0x74ecf41/0x7487000, compress 0x0/0x0/0x0, omap 0x61630, meta 0x604e9d0), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 23691264 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd2d5d400 session 0x559dd61c4fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0417c00 session 0x559dd61c4c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0416400 session 0x559dd5861880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:25.374301+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 212115456 unmapped: 23543808 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd14a3c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:26.374443+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213008384 unmapped: 22650880 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:27.374590+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671149 data_alloc: 268435456 data_used: 51258890
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213557248 unmapped: 22102016 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:28.374791+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213557248 unmapped: 22102016 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:29.375012+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213557248 unmapped: 22102016 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:30.375194+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213622784 unmapped: 22036480 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.780358315s of 11.797229767s, submitted: 11
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:31.375385+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213729280 unmapped: 21929984 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:32.375545+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671581 data_alloc: 268435456 data_used: 51258890
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213729280 unmapped: 21929984 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:33.375700+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:34.375902+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:35.376071+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:36.376242+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:37.376438+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671581 data_alloc: 268435456 data_used: 51258890
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:38.376666+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:39.376800+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 213745664 unmapped: 21913600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:40.376942+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 21315584 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:41.377070+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:42.377193+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682461 data_alloc: 268435456 data_used: 52077066
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.950303078s of 11.955555916s, submitted: 5
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:43.377355+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:44.377462+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:45.377573+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:46.377755+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:47.377885+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3682605 data_alloc: 268435456 data_used: 52077066
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:48.378136+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 21266432 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:49.378371+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 21258240 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:50.378520+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 21258240 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26a1000/0x0/0x4ffc00000, data 0x7510f41/0x74ab000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:51.378646+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd1741c00 session 0x559dd7893a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd2795000 session 0x559dd07d2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 20725760 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0416400 session 0x559dd17d1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:52.378780+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3679244 data_alloc: 268435456 data_used: 52573706
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 20725760 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:53.378944+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 20725760 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:54.379184+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 20725760 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f26c6000/0x0/0x4ffc00000, data 0x74ecf31/0x7486000, compress 0x0/0x0/0x0, omap 0x615b4, meta 0x604ea4c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:55.379331+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 20725760 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.524424553s of 13.536228180s, submitted: 6
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0417c00 session 0x559dd17d16c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:56.379461+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd2d4dc00 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:57.379629+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3265468 data_alloc: 251658240 data_used: 33672122
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:58.379894+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:59.380080+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f4c5e000/0x0/0x4ffc00000, data 0x3e48ecf/0x3de1000, compress 0x0/0x0/0x0, omap 0x61863, meta 0x604e79d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:00.380221+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:01.380353+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f4c5e000/0x0/0x4ffc00000, data 0x3e48ecf/0x3de1000, compress 0x0/0x0/0x0, omap 0x61863, meta 0x604e79d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:02.380542+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3265468 data_alloc: 251658240 data_used: 33672122
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203956224 unmapped: 31703040 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:03.380820+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 204013568 unmapped: 31645696 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:04.381001+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 204013568 unmapped: 31645696 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd14a3c00 session 0x559dd08a2540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd383fc00 session 0x559dd2785180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0416400 session 0x559dd6742000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:05.381171+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 31662080 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:06.381309+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 31662080 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f4c82000/0x0/0x4ffc00000, data 0x3e24ecf/0x3dbd000, compress 0x0/0x0/0x0, omap 0x61863, meta 0x604e79d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:07.381441+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3260068 data_alloc: 251658240 data_used: 34153402
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 31662080 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd0417c00 session 0x559dd2fabdc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:08.381633+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 31662080 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 ms_handle_reset con 0x559dd2795000 session 0x559dd61c5180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:09.381796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 31662080 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.795621872s of 13.868969917s, submitted: 44
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:10.381927+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:11.382134+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f5d8f000/0x0/0x4ffc00000, data 0x3b44abf/0x3dbb000, compress 0x0/0x0/0x0, omap 0x619cc, meta 0x604e634), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 ms_handle_reset con 0x559dd6aaac00 session 0x559dd2f5b180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 ms_handle_reset con 0x559dd2d66400 session 0x559dd61c56c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:12.382304+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 ms_handle_reset con 0x559dd0416400 session 0x559dd2784c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3237048 data_alloc: 251658240 data_used: 32310104
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:13.382430+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:14.382600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:15.382733+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f5d8f000/0x0/0x4ffc00000, data 0x3b44abf/0x3dbb000, compress 0x0/0x0/0x0, omap 0x619cc, meta 0x604e634), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:16.382882+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 200040448 unmapped: 35618816 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 ms_handle_reset con 0x559dd0417c00 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:17.383077+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f5d8f000/0x0/0x4ffc00000, data 0x3b44abf/0x3dbb000, compress 0x0/0x0/0x0, omap 0x619cc, meta 0x604e634), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096930 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194109440 unmapped: 41549824 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:18.383259+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:19.383422+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:20.383624+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:21.383766+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:22.383902+0000)
Feb 02 16:02:22 compute-0 ceph-mon[75334]: from='client.19162 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3801929998' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 02 16:02:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4202042859' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 02 16:02:22 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2242434616' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100424 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2d6b53e/0x2fe3000, compress 0x0/0x0/0x0, omap 0x61b70, meta 0x604e490), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:23.384119+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:24.384311+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:25.384460+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:26.384654+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:27.384773+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100424 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:28.384994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2d6b53e/0x2fe3000, compress 0x0/0x0/0x0, omap 0x61b70, meta 0x604e490), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:29.385199+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:30.385328+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:31.385533+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2d6b53e/0x2fe3000, compress 0x0/0x0/0x0, omap 0x61b70, meta 0x604e490), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:32.385689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100424 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:33.385916+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2d6b53e/0x2fe3000, compress 0x0/0x0/0x0, omap 0x61b70, meta 0x604e490), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:34.386122+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:35.386328+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:36.386530+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:37.386666+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100424 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:38.386840+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 41541632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.968898773s of 29.017204285s, submitted: 31
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:39.387027+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 468 handle_osd_map epochs [468,469], i have 469, src has [1,469]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd2795000 session 0x559dd26d5880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2d6b53e/0x2fe3000, compress 0x0/0x0/0x0, omap 0x61b70, meta 0x604e490), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195256320 unmapped: 40402944 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:40.387220+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195256320 unmapped: 40402944 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:41.387484+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383fc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd383fc00 session 0x559dd09de540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd0417c00 session 0x559dd1b14a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd0416400 session 0x559dd57f5180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd2795000 session 0x559dd57f5340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195256320 unmapped: 40402944 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd2d66400 session 0x559dd2a12000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:42.387612+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 ms_handle_reset con 0x559dd2d4dc00 session 0x559dd2f5a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3106612 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195280896 unmapped: 40378368 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:43.387781+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 469 handle_osd_map epochs [469,470], i have 470, src has [1,470]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 ms_handle_reset con 0x559dd0416400 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:44.387989+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:45.388148+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f6b60000/0x0/0x4ffc00000, data 0x2d6ecca/0x2fe9000, compress 0x0/0x0/0x0, omap 0x61f7b, meta 0x604e085), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:46.388313+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:47.388444+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109088 data_alloc: 234881024 data_used: 23298904
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:48.388638+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f6b60000/0x0/0x4ffc00000, data 0x2d6ecca/0x2fe9000, compress 0x0/0x0/0x0, omap 0x61f7b, meta 0x604e085), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:49.388780+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.733218193s of 10.869009018s, submitted: 65
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 ms_handle_reset con 0x559dd0417c00 session 0x559dcedd1c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f6b60000/0x0/0x4ffc00000, data 0x2d6ecca/0x2fe9000, compress 0x0/0x0/0x0, omap 0x61f7b, meta 0x604e085), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:50.388966+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:51.389156+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195321856 unmapped: 40337408 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 470 handle_osd_map epochs [470,471], i have 471, src has [1,471]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 471 heartbeat osd_stat(store_statfs(0x4f6b5f000/0x0/0x4ffc00000, data 0x2d6ed2c/0x2fea000, compress 0x0/0x0/0x0, omap 0x61f7b, meta 0x604e085), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 471 ms_handle_reset con 0x559dd2795000 session 0x559dd26d5880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:52.389371+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115227 data_alloc: 234881024 data_used: 23303063
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195321856 unmapped: 40337408 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 471 ms_handle_reset con 0x559dd2d66400 session 0x559dd147b500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d5d400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd2d5d400 session 0x559dd2a12e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:53.389531+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0416400 session 0x559dd2784fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0417c00 session 0x559dd2f4dc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195411968 unmapped: 40247296 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd2795000 session 0x559dd2fdac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd2d66400 session 0x559dd2f5afc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f6b53000/0x0/0x4ffc00000, data 0x2d724a9/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6262a, meta 0x604d9d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:54.389689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195411968 unmapped: 40247296 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:55.389864+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0417000 session 0x559dd2785180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195420160 unmapped: 40239104 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:56.390029+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0416400 session 0x559dd2fdb880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0417000 session 0x559dd2f5ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195428352 unmapped: 40230912 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f6b5a000/0x0/0x4ffc00000, data 0x2d72427/0x2ff2000, compress 0x0/0x0/0x0, omap 0x6262a, meta 0x604d9d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:57.390233+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f6b5a000/0x0/0x4ffc00000, data 0x2d72427/0x2ff2000, compress 0x0/0x0/0x0, omap 0x6262a, meta 0x604d9d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120652 data_alloc: 234881024 data_used: 23303063
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195428352 unmapped: 40230912 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:58.390469+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f6b5a000/0x0/0x4ffc00000, data 0x2d72427/0x2ff2000, compress 0x0/0x0/0x0, omap 0x6262a, meta 0x604d9d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195428352 unmapped: 40230912 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd0417c00 session 0x559dd2784540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f6b5b000/0x0/0x4ffc00000, data 0x2d723c5/0x2ff1000, compress 0x0/0x0/0x0, omap 0x6262a, meta 0x604d9d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:59.390634+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 ms_handle_reset con 0x559dd2795000 session 0x559dd61c4c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.968999863s of 10.111907959s, submitted: 48
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 473 ms_handle_reset con 0x559dd2d66400 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:00.390745+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:01.390869+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 473 ms_handle_reset con 0x559dd0416400 session 0x559dd26d5a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:02.390989+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122195 data_alloc: 234881024 data_used: 23302965
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 86K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 7584 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8613 writes, 28K keys, 8613 commit groups, 1.0 writes per commit group, ingest: 26.31 MB, 0.04 MB/s
                                           Interval WAL: 8613 writes, 3721 syncs, 2.31 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:03.391554+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:04.391760+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f6b58000/0x0/0x4ffc00000, data 0x2d73ef1/0x2ff2000, compress 0x0/0x0/0x0, omap 0x62795, meta 0x604d86b), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:05.391904+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:06.392160+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 40214528 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 473 ms_handle_reset con 0x559dd0417000 session 0x559dd17d1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:07.392393+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125477 data_alloc: 234881024 data_used: 23302965
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195461120 unmapped: 40198144 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:08.392581+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195461120 unmapped: 40198144 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:09.392728+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 474 ms_handle_reset con 0x559dd2795000 session 0x559dd2fda540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f6b53000/0x0/0x4ffc00000, data 0x2d759e2/0x2ff7000, compress 0x0/0x0/0x0, omap 0x62976, meta 0x604d68a), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195461120 unmapped: 40198144 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 474 handle_osd_map epochs [474,475], i have 475, src has [1,475]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.986194611s of 10.044680595s, submitted: 38
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d66400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:10.392837+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 475 ms_handle_reset con 0x559dd0417400 session 0x559dd07d2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195461120 unmapped: 40198144 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 ms_handle_reset con 0x559dd2d66400 session 0x559dd11daa80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:11.393063+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 ms_handle_reset con 0x559dd0416400 session 0x559dd61c5c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 ms_handle_reset con 0x559dd0417c00 session 0x559dd2f5b340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f6b4e000/0x0/0x4ffc00000, data 0x2d775f0/0x2ffc000, compress 0x0/0x0/0x0, omap 0x62d51, meta 0x604d2af), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195051520 unmapped: 40607744 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f6b4e000/0x0/0x4ffc00000, data 0x2d775f0/0x2ffc000, compress 0x0/0x0/0x0, omap 0x62d51, meta 0x604d2af), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:12.393213+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 477 ms_handle_reset con 0x559dd0417000 session 0x559dd2784000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 477 ms_handle_reset con 0x559dd0417400 session 0x559dd2f4ce00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3141851 data_alloc: 234881024 data_used: 23303079
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195100672 unmapped: 40558592 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:13.393440+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2795000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195125248 unmapped: 40534016 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:14.393563+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 478 ms_handle_reset con 0x559dd2795000 session 0x559dd27a5180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 478 ms_handle_reset con 0x559dd0416400 session 0x559dd3be3a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:15.393744+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:16.393913+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f6b47000/0x0/0x4ffc00000, data 0x2d7c8a4/0x3001000, compress 0x0/0x0/0x0, omap 0x63d87, meta 0x604c279), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:17.394061+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143115 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:18.394287+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f6b47000/0x0/0x4ffc00000, data 0x2d7c8a4/0x3001000, compress 0x0/0x0/0x0, omap 0x63d87, meta 0x604c279), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:19.394430+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f6b46000/0x0/0x4ffc00000, data 0x2d7e33f/0x3004000, compress 0x0/0x0/0x0, omap 0x63f61, meta 0x604c09f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:20.394784+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:21.394961+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:22.395123+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f6b46000/0x0/0x4ffc00000, data 0x2d7e33f/0x3004000, compress 0x0/0x0/0x0, omap 0x63f61, meta 0x604c09f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f6b46000/0x0/0x4ffc00000, data 0x2d7e33f/0x3004000, compress 0x0/0x0/0x0, omap 0x63f61, meta 0x604c09f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145425 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.482936859s of 12.945261955s, submitted: 79
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:23.395348+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b46000/0x0/0x4ffc00000, data 0x2d7e33f/0x3004000, compress 0x0/0x0/0x0, omap 0x63f61, meta 0x604c09f), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:24.395564+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:25.395798+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:26.395949+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:27.396111+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3148199 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:28.396327+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b43000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x6413b, meta 0x604bec5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:29.396529+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0417000 session 0x559dd23b1dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:30.396667+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0417400 session 0x559dd2faa000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0417c00 session 0x559dd57f4e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:31.396837+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:32.397002+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4c400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd2d4c400 session 0x559dd1b14700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152714 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 40517632 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:33.397217+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b44000/0x0/0x4ffc00000, data 0x2d7fdce/0x3008000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.443208694s of 10.484853745s, submitted: 31
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0416400 session 0x559dd2faac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0417000 session 0x559dd08a3180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:34.397407+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:35.397590+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:36.397779+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:37.397961+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:38.398212+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:39.398350+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:40.398479+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:41.398613+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:42.398948+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:43.399138+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:44.399317+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:45.399431+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:46.399574+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:47.399757+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:48.399927+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:49.400148+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:50.400332+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 40484864 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:51.400479+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.792482376s of 17.824825287s, submitted: 17
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195182592 unmapped: 40476672 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:52.400607+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195182592 unmapped: 40476672 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:53.400765+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:54.400912+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:55.401102+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:56.401245+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:57.401381+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:58.401606+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:59.401809+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:00.402019+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:01.402213+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:02.402380+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:03.402585+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:04.402793+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:05.402951+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:06.403126+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:07.403506+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:08.404924+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:09.405472+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:10.406439+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:11.407063+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:12.407789+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151377 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:13.408269+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:14.408645+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f6b45000/0x0/0x4ffc00000, data 0x2d7fdbe/0x3007000, compress 0x0/0x0/0x0, omap 0x64283, meta 0x604bd7d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:15.409961+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:16.410391+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.715740204s of 25.856380463s, submitted: 90
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 ms_handle_reset con 0x559dd0417400 session 0x559dd2f4ca80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:17.410629+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152911 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:18.411220+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 40435712 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:19.411559+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd0417c00 session 0x559dd2784fc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 40427520 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:20.411821+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0860c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd330bc00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd330bc00 session 0x559dd3be3340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd0860c00 session 0x559dd2ec0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f6b40000/0x0/0x4ffc00000, data 0x2d8195a/0x300a000, compress 0x0/0x0/0x0, omap 0x64957, meta 0x604b6a9), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 40419328 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:21.411948+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd0416400 session 0x559dd2f4d340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd0417000 session 0x559dd2ec1a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195256320 unmapped: 40402944 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:22.412093+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd0417400 session 0x559dd2f5a1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3313400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157242 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 ms_handle_reset con 0x559dd3313400 session 0x559dd6743180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195297280 unmapped: 40361984 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:23.412252+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 481 handle_osd_map epochs [481,482], i have 482, src has [1,482]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 ms_handle_reset con 0x559dd0416400 session 0x559dd07d2000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 ms_handle_reset con 0x559dd0417c00 session 0x559dd07d2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195313664 unmapped: 40345600 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:24.412487+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 ms_handle_reset con 0x559dd0417000 session 0x559dd2a13dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f6b3d000/0x0/0x4ffc00000, data 0x2d8354a/0x300d000, compress 0x0/0x0/0x0, omap 0x64b19, meta 0x604b4e7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 ms_handle_reset con 0x559dd0417400 session 0x559dd2fab180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:25.412773+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:26.412952+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:27.413113+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158847 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:28.413327+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:29.413500+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:30.413796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f6b3f000/0x0/0x4ffc00000, data 0x2d8354a/0x300d000, compress 0x0/0x0/0x0, omap 0x64c2a, meta 0x604b3d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:31.414007+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:32.414328+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.675695419s of 15.849345207s, submitted: 83
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:33.414660+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3f000/0x0/0x4ffc00000, data 0x2d8354a/0x300d000, compress 0x0/0x0/0x0, omap 0x64c2a, meta 0x604b3d6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:34.414907+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:35.415147+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:36.415448+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:37.415746+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:38.416138+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:39.416421+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:40.416697+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:41.416974+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:42.417223+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:43.417432+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:44.417578+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:45.417840+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:46.417994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:47.418229+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:48.418473+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:49.418835+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:50.419038+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:51.419415+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:52.419553+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:53.419819+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:54.420021+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:55.420224+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:56.420454+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:57.420634+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:58.420922+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:59.421171+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:00.421342+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:01.421606+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:02.421783+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:03.422003+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:04.422171+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0860c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:05.422416+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:06.422640+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 ms_handle_reset con 0x559dd0860c00 session 0x559dd2f5b340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:07.422844+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f6b3a000/0x0/0x4ffc00000, data 0x2d84fc9/0x3010000, compress 0x0/0x0/0x0, omap 0x651ab, meta 0x604ae55), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3162341 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:08.423026+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195338240 unmapped: 40321024 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.761226654s of 35.771827698s, submitted: 10
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0416400 session 0x559dd29848c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:09.423269+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:10.423401+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417000 session 0x559dd61c4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:11.423605+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b36000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x6541f, meta 0x604abe1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:12.423794+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:13.423947+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167560 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:14.424105+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:15.424324+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b38000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x6541f, meta 0x604abe1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:16.424563+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:17.424811+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:18.424994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167560 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b38000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x6541f, meta 0x604abe1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:19.425294+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:20.425458+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:21.425628+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195346432 unmapped: 40312832 heap: 235659264 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b38000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x6541f, meta 0x604abe1), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:22.425763+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.430121422s of 13.450010300s, submitted: 11
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198115328 unmapped: 41222144 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417400 session 0x559dd23b0700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417c00 session 0x559dd22f7a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd2d4d000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd2d4d000 session 0x559dd2fdafc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0416400 session 0x559dd09dee00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417000 session 0x559dd09de540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:23.425917+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216877 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:24.426108+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:25.426233+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:26.426402+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f637b000/0x0/0x4ffc00000, data 0x3543b75/0x37d1000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:27.426579+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:28.426858+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216877 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f637b000/0x0/0x4ffc00000, data 0x3543b75/0x37d1000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:29.427022+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 44883968 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417400 session 0x559dd08a21c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:30.427190+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194584576 unmapped: 44752896 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:31.427363+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:32.427514+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:33.427696+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3263857 data_alloc: 251658240 data_used: 31203710
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f637b000/0x0/0x4ffc00000, data 0x3543b75/0x37d1000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f637b000/0x0/0x4ffc00000, data 0x3543b75/0x37d1000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:34.427853+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:35.428013+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:36.428162+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:37.428389+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:38.428630+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3263857 data_alloc: 251658240 data_used: 31203710
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f637b000/0x0/0x4ffc00000, data 0x3543b75/0x37d1000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:39.428765+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:40.428943+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 45195264 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.437725067s of 18.536348343s, submitted: 23
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:41.429124+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 40017920 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f5ccf000/0x0/0x4ffc00000, data 0x3be7b75/0x3e75000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:42.429284+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:43.429422+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3314529 data_alloc: 251658240 data_used: 32265598
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:44.429552+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:45.429690+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:46.429882+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:47.430032+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f5cc3000/0x0/0x4ffc00000, data 0x3bf3b75/0x3e81000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:48.430186+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3314545 data_alloc: 251658240 data_used: 32265598
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:49.430328+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:50.430476+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:51.430589+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:52.430729+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:53.430887+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3314545 data_alloc: 251658240 data_used: 32265598
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 199491584 unmapped: 39845888 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f5cc3000/0x0/0x4ffc00000, data 0x3bf3b75/0x3e81000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd174a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.759460449s of 13.054812431s, submitted: 69
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:54.431012+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198828032 unmapped: 40509440 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:55.431176+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198828032 unmapped: 40509440 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f5ccb000/0x0/0x4ffc00000, data 0x3bf3b75/0x3e81000, compress 0x0/0x0/0x0, omap 0x6566d, meta 0x604a993), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:56.431394+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198828032 unmapped: 40509440 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:57.431600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198828032 unmapped: 40509440 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd174a000 session 0x559dd2faa8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:58.431789+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3307553 data_alloc: 251658240 data_used: 32265614
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 198828032 unmapped: 40509440 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417c00 session 0x559dd7893a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd383e000 session 0x559dd2fab880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:59.431983+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0416400 session 0x559dcedd1500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b38000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x65880, meta 0x604a780), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:00.432256+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:01.432443+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:02.432612+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:03.432815+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6b38000/0x0/0x4ffc00000, data 0x2d86b75/0x3014000, compress 0x0/0x0/0x0, omap 0x65880, meta 0x604a780), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176057 data_alloc: 234881024 data_used: 23303566
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417000 session 0x559dd1b15340
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:04.432947+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.583738327s of 10.669684410s, submitted: 26
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 44859392 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 ms_handle_reset con 0x559dd0417400 session 0x559dd2f4d500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 484 handle_osd_map epochs [484,485], i have 485, src has [1,485]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:05.433155+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 485 ms_handle_reset con 0x559dd0417c00 session 0x559dd09de1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f6b35000/0x0/0x4ffc00000, data 0x2d886f3/0x3015000, compress 0x0/0x0/0x0, omap 0x65d09, meta 0x604a2f7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f6b35000/0x0/0x4ffc00000, data 0x2d886f3/0x3015000, compress 0x0/0x0/0x0, omap 0x65d09, meta 0x604a2f7), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:06.433301+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 485 ms_handle_reset con 0x559dd0416400 session 0x559dd07d3180
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:07.433421+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 486 ms_handle_reset con 0x559dd0417000 session 0x559dd147b6c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f6b34000/0x0/0x4ffc00000, data 0x2d84281/0x3016000, compress 0x0/0x0/0x0, omap 0x65e5a, meta 0x604a1a6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:08.433630+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f6b34000/0x0/0x4ffc00000, data 0x2d84281/0x3016000, compress 0x0/0x0/0x0, omap 0x65e5a, meta 0x604a1a6), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3178903 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 486 ms_handle_reset con 0x559dd0417400 session 0x559dd08a28c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:09.433822+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:10.433988+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:11.434239+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:12.434367+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f6b30000/0x0/0x4ffc00000, data 0x2d7de71/0x3018000, compress 0x0/0x0/0x0, omap 0x665a4, meta 0x6049a5c), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:13.434498+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182999 data_alloc: 234881024 data_used: 23303550
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:14.434641+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:15.434840+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f6b2f000/0x0/0x4ffc00000, data 0x2d7f90c/0x301b000, compress 0x0/0x0/0x0, omap 0x66731, meta 0x60498cf), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:16.435628+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:17.436176+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f6b2f000/0x0/0x4ffc00000, data 0x2d7f90c/0x301b000, compress 0x0/0x0/0x0, omap 0x66731, meta 0x60498cf), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:18.436488+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3183127 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:19.437349+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f6b2f000/0x0/0x4ffc00000, data 0x2d7f90c/0x301b000, compress 0x0/0x0/0x0, omap 0x66731, meta 0x60498cf), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.151403427s of 14.886940956s, submitted: 95
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 44851200 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd383e000 session 0x559dd17d0540
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd174a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd174a000 session 0x559dd29848c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:20.439106+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:21.439439+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:22.440293+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:23.440483+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185389 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:24.441103+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:25.441298+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:26.442734+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:27.442873+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:28.443049+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185389 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:29.443757+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:30.444116+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:31.444268+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:32.444543+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:33.445244+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185389 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:34.445375+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:35.445511+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:36.445796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:37.445935+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:38.446103+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185389 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:39.446215+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 43819008 heap: 239337472 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f6b2c000/0x0/0x4ffc00000, data 0x2d8138b/0x301e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0416400 session 0x559dd2ec0380
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.769853592s of 20.783802032s, submitted: 15
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:40.446334+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0417000 session 0x559dd2fda1c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:41.446494+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:42.446778+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:43.447018+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291929 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:44.447189+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:45.447349+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:46.447552+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:47.447768+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:48.447992+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291929 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:49.448206+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0417400 session 0x559dd07d2e00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:50.448781+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd174a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd174a000 session 0x559dd147ac40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:51.449054+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd383e000 session 0x559dd08a3500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0416400 session 0x559dd2f4d500
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:52.449187+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:53.449334+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291929 data_alloc: 234881024 data_used: 23304163
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:54.449449+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 195526656 unmapped: 48013312 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:55.449567+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196141056 unmapped: 47398912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:56.449695+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:57.449829+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:58.450080+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3343385 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:59.450296+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:00.450475+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:01.450674+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:02.450855+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:03.451023+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3343385 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:04.451189+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f572e000/0x0/0x4ffc00000, data 0x418138b/0x441e000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x60495b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 47276032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.809602737s of 24.948778152s, submitted: 5
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:05.451625+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 37994496 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:06.451809+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 35512320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:07.452115+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:08.452396+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3401817 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f2a1f000/0x0/0x4ffc00000, data 0x4b5038b/0x4ded000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x83895b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:09.452585+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:10.452753+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:11.452946+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:12.453096+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:13.453223+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3401817 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:14.453378+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f2a1f000/0x0/0x4ffc00000, data 0x4b5038b/0x4ded000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x83895b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0417000 session 0x559dd57f4c40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd0417400 session 0x559dd2a13dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:15.453798+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd174a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 ms_handle_reset con 0x559dd174a000 session 0x559dd2f4c8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:16.453977+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:17.454120+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:18.454256+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3401817 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f2a1f000/0x0/0x4ffc00000, data 0x4b5038b/0x4ded000, compress 0x0/0x0/0x0, omap 0x66a4b, meta 0x83895b5), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:19.454416+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.226264000s of 14.474633217s, submitted: 58
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _renew_subs
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:20.454605+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:21.454768+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:22.454919+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1a000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66b9d, meta 0x8389463), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:23.455044+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3405023 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:24.455183+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:25.455327+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:26.455407+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:27.455508+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:28.455655+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1a000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66b9d, meta 0x8389463), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3405023 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1a000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66b9d, meta 0x8389463), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:29.455814+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:30.455936+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1a000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66b9d, meta 0x8389463), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:31.456136+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1a000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66b9d, meta 0x8389463), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:32.456265+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd383e000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.731680870s of 12.736555099s, submitted: 2
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd383e000 session 0x559dd7893a40
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201736192 unmapped: 41803776 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:33.456394+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407745 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:34.456553+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:35.456732+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:36.456838+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:37.456986+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:38.457273+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407745 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:39.457395+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:40.457502+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:41.457635+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:42.457759+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:43.457879+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407745 data_alloc: 251658240 data_used: 32047587
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:44.458465+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:45.458600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:46.458797+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:47.458951+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:48.459159+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3414657 data_alloc: 251658240 data_used: 33038819
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:49.459285+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:50.459414+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:51.459521+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:52.459637+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:53.459778+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3414657 data_alloc: 251658240 data_used: 33038819
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:54.459934+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:55.460051+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:56.460167+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:57.460406+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f4a/0x4df1000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:58.460807+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3414657 data_alloc: 251658240 data_used: 33038819
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:59.460947+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:00.461117+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:01.461282+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.610149384s of 29.630218506s, submitted: 10
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:02.461422+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c58f4a/0x4ef8000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets getting new tickets!
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:03.461689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _finish_auth 0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:03.462730+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421285 data_alloc: 251658240 data_used: 33042915
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:04.461812+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:05.462219+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:06.462391+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:07.462519+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:08.462693+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c58f4a/0x4ef8000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421285 data_alloc: 251658240 data_used: 33042915
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:09.462899+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:10.463045+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:11.463215+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c58f4a/0x4ef8000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:12.463384+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:13.463911+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.961565018s of 11.972284317s, submitted: 1
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd0416400 session 0x559dd61c4000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd0417000 session 0x559dd2fab880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3423201 data_alloc: 251658240 data_used: 33255907
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201744384 unmapped: 41795584 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:14.464034+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0417400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd0417400 session 0x559dd6742a80
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:15.464159+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:16.464319+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:17.464452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c58f27/0x4ef7000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:18.464620+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd174a000
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd174a000 session 0x559dd2f5a8c0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:19.464787+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:20.464970+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd3843c00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:21.465203+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd3843c00 session 0x559dd2f5afc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:22.465397+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:23.465562+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:24.465849+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:25.466048+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:26.466247+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:27.466384+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:28.466565+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:29.466786+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:30.466937+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:31.467076+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:32.467262+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:33.467444+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:34.467604+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:35.467773+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:36.468000+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:37.468209+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:38.468423+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:39.468563+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:40.468783+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:41.468969+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:42.469180+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:43.469483+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:44.469609+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:45.469769+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:46.469929+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:47.470090+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1b000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:48.470286+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416405 data_alloc: 251658240 data_used: 33251811
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:49.470443+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:50.470596+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 41705472 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:51.470785+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.397254944s of 37.441268921s, submitted: 25
Feb 02 16:02:22 compute-0 ceph-osd[88227]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:52.470913+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:53.471093+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:54.471235+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:55.471374+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:56.471523+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:57.471650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:58.471830+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:59.471933+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:00.472059+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:01.472209+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:02.472299+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:03.472420+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:04.472592+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:05.472772+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:06.472918+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:07.473029+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:08.473205+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:09.473407+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:10.473609+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:11.473781+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:12.473940+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:13.474133+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:14.474436+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:15.474619+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:16.474814+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:17.474920+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:18.475115+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:19.475249+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:20.475416+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:21.475567+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:22.475792+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:23.475899+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:24.476029+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:25.476224+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:26.476368+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:27.476526+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:28.476740+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:29.476871+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:30.477059+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:31.477287+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:32.477503+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:33.477698+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:34.477862+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:35.478033+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:36.478266+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:37.478486+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:38.478648+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:39.478805+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:40.478945+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:41.479076+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:42.479209+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:43.479379+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:44.479565+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:45.479841+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:46.480055+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:47.480251+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:48.480446+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:49.480582+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:50.480732+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:51.480860+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:52.481021+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:53.481161+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:54.481295+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:55.481461+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:56.481628+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:57.481766+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:58.481919+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:59.482118+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:00.482324+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:01.482494+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:02.482644+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:03.482834+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:04.482982+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:05.483192+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:06.483372+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 41697280 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:07.483556+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:08.483792+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:09.483946+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:10.484082+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:11.484285+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:12.484466+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 41689088 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:13.484621+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:14.484823+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:15.485001+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:16.485186+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:17.485346+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:18.485532+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:19.485792+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:20.485926+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 41680896 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:21.486062+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:22.486256+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:23.486392+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:24.486621+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:25.486826+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:26.487006+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:27.487197+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201867264 unmapped: 41672704 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:28.487394+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:29.487597+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:30.487796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:31.487938+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:32.488122+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:33.488271+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:34.488461+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:35.488784+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 41664512 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:36.488994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:37.489205+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:38.489352+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:39.489483+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:40.489673+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:41.489811+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:42.489915+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:43.490021+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201883648 unmapped: 41656320 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:44.490207+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:45.490443+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:46.490615+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:47.490873+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:48.491067+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:49.491250+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:50.491439+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:51.491601+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201891840 unmapped: 41648128 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:52.491856+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:53.492076+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:54.492209+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:55.492365+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:56.492506+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:57.492691+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:58.493326+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:59.493481+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201900032 unmapped: 41639936 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:00.493647+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201908224 unmapped: 41631744 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:01.493836+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201908224 unmapped: 41631744 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:02.493962+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201908224 unmapped: 41631744 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:03.494221+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201916416 unmapped: 41623552 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:04.494365+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201916416 unmapped: 41623552 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:05.494498+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201916416 unmapped: 41623552 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:06.494626+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201916416 unmapped: 41623552 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:07.494856+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201916416 unmapped: 41623552 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:08.495091+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:09.495222+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:10.495391+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:11.495583+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:12.495841+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:13.496049+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 41615360 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:14.496226+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:15.496427+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:16.496647+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:17.496888+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:18.877913+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:19.878025+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:20.878210+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:21.878435+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:22.878620+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:23.878774+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201932800 unmapped: 41607168 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:24.878937+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:25.879126+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:26.879277+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:27.879517+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:28.879770+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:29.879957+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:30.880149+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:31.880334+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201940992 unmapped: 41598976 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:32.880504+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:33.880649+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:34.880814+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:35.880997+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:36.881128+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:37.881297+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:38.881577+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:39.881843+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201949184 unmapped: 41590784 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:40.882085+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201957376 unmapped: 41582592 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:41.882244+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201957376 unmapped: 41582592 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:42.882367+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201957376 unmapped: 41582592 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:43.882500+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201957376 unmapped: 41582592 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:44.882648+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:45.882789+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:46.882946+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:47.883117+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:48.883302+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:49.883459+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:50.883611+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:51.883899+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:52.884078+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:53.884211+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:54.884410+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:55.884569+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:56.884763+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:57.884920+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:58.885089+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:59.885230+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:00.885371+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:01.885532+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:02.885771+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:03.885926+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:04.886050+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:05.886185+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:06.886333+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:07.886471+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:08.886737+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:09.886930+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:10.887085+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:11.887248+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201990144 unmapped: 41549824 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:12.887410+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:13.887592+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:14.887772+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:15.887931+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:16.888121+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:17.888342+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:18.888559+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:19.888768+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:20.888913+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:21.889054+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:22.889253+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:23.889434+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:24.889595+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:25.889824+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:26.890039+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:27.890193+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202006528 unmapped: 41533440 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:28.890361+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202014720 unmapped: 41525248 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:29.890507+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:30.890625+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:31.890858+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:32.891112+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:33.891284+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:34.891458+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:35.891600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202022912 unmapped: 41517056 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:36.891781+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:37.891994+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:38.892187+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:39.892831+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:40.893033+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:41.893174+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:42.893442+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:43.893583+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 41508864 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:44.894527+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:45.894668+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:46.894786+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:47.895013+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:48.895295+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:49.895463+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:50.895670+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:51.895777+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 41500672 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:52.896016+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:53.896199+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:54.896333+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:55.896501+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:56.896681+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:57.896880+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:58.897074+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:59.897233+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202047488 unmapped: 41492480 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:00.897418+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:01.897577+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:02.897804+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 89K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 8124 syncs, 2.79 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1230 writes, 3506 keys, 1230 commit groups, 1.0 writes per commit group, ingest: 4.33 MB, 0.01 MB/s
                                           Interval WAL: 1230 writes, 540 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:03.897990+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:04.898144+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:05.898294+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:06.898452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:07.898597+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:08.898871+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202063872 unmapped: 41476096 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:09.899011+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:10.899149+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:11.899312+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:12.899477+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:13.899670+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:14.899823+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:15.899987+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 41467904 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:16.900175+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:17.900837+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:18.901002+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:19.901205+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:20.901449+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:21.901671+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:22.901853+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 41459712 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:23.902060+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202088448 unmapped: 41451520 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:24.902236+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:25.902411+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:26.902585+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:27.902774+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:28.902949+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:29.903144+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:30.903297+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 41443328 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:31.903472+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202104832 unmapped: 41435136 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:32.903689+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202104832 unmapped: 41435136 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:33.903931+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202104832 unmapped: 41435136 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:34.904170+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202104832 unmapped: 41435136 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:35.904338+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 41418752 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:36.904487+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 41418752 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:37.904650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 41418752 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:38.904892+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 41418752 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:39.905052+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202129408 unmapped: 41410560 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:40.905217+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:41.905415+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:42.905739+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:43.905992+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:44.906280+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:45.906438+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:46.906607+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202137600 unmapped: 41402368 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:47.906782+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202145792 unmapped: 41394176 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:48.907027+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202145792 unmapped: 41394176 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:49.907365+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202145792 unmapped: 41394176 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:50.907813+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.002746582s of 300.037414551s, submitted: 24
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202153984 unmapped: 41385984 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:51.908016+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202162176 unmapped: 41377792 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:52.908768+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202162176 unmapped: 41377792 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:53.908998+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202203136 unmapped: 41336832 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:54.909171+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:55.909337+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:56.909495+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:57.909859+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:58.910183+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:59.910510+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:00.910772+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:01.910976+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:02.911189+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 41328640 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:03.911488+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:04.911788+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:05.912145+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:06.912348+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:07.912540+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:08.912780+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:09.912988+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:10.913211+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202219520 unmapped: 41320448 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:11.913384+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:12.913588+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:13.913745+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:14.913895+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:15.914109+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:16.914260+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:18.129673+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:19.129917+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 41312256 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:20.130107+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202235904 unmapped: 41304064 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:21.130240+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202235904 unmapped: 41304064 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:22.130433+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202235904 unmapped: 41304064 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:23.130612+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202235904 unmapped: 41304064 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:24.130783+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:25.130893+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:26.131028+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:27.131162+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:28.131349+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:29.131489+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202252288 unmapped: 41287680 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:30.131670+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202252288 unmapped: 41287680 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:31.131813+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202252288 unmapped: 41287680 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:32.131964+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 41279488 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:33.132178+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202260480 unmapped: 41279488 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:34.132326+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 41271296 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:35.132474+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 41271296 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:36.132607+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 41271296 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:37.132789+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:38.132948+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:39.133138+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:40.133294+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:41.133433+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:42.133563+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:43.133841+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 41263104 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:44.134031+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:45.134196+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:46.134349+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:47.134559+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:48.134823+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:49.135042+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:50.135180+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:51.135359+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202285056 unmapped: 41254912 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:52.135521+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 41246720 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:53.135780+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202293248 unmapped: 41246720 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:54.135930+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:55.136063+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:56.136215+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:57.136351+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:58.136477+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:59.136748+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 41230336 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:00.136915+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 41222144 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:01.137067+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 41222144 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:02.137205+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:03.137338+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:04.137541+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:05.137754+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:06.137919+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:07.138085+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:08.138232+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 41213952 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:09.138417+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 41205760 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:10.138600+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 41205760 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:11.138796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 41205760 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:12.138966+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 41205760 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:13.139131+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 41205760 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:14.139272+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 41197568 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:15.139424+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 41197568 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:16.139678+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:17.139899+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:18.140085+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:19.140828+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:20.141008+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:21.141167+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:22.141327+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:23.141497+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 41181184 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:24.141671+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:25.141889+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:26.142056+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:27.142253+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:28.142452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:29.142779+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 41172992 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:30.142991+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 41164800 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:31.143198+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 41164800 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:32.143427+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202383360 unmapped: 41156608 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:33.143576+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:34.143797+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:35.143962+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:36.144159+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:37.144335+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:38.144486+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:39.144758+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:40.144915+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 41148416 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:41.145097+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:42.145315+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:43.145452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:44.145624+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:45.145804+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:46.145995+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:47.146158+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:48.146317+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 41140224 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:49.146498+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 41132032 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:50.146687+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 41123840 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:51.147047+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:52.147223+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:53.147452+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:54.147627+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:55.147796+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:56.147973+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 41115648 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:57.148147+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 41107456 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:58.148298+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 41107456 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:59.148467+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 41107456 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:00.148631+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 41107456 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:01.148808+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 41107456 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:02.148986+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 41099264 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:03.149195+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 41099264 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:04.149377+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 41099264 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:05.149532+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 41091072 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:06.149755+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416341 data_alloc: 251658240 data_used: 33255872
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 41082880 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:07.149912+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 41082880 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2a1c000/0x0/0x4ffc00000, data 0x4b51f27/0x4df0000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:08.150067+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 41082880 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:09.150338+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 41082880 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:10.150521+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 41082880 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd06f2000 session 0x559dd1b15dc0
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd0700000 session 0x559dd2faae00
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd2789c00 session 0x559dd2fda700
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: handle_auth_request added challenge on 0x559dd0416400
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 138.905349731s of 139.052856445s, submitted: 90
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 ms_handle_reset con 0x559dd0416400 session 0x559dd7d29880
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:11.150693+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:12.150931+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:13.151114+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:14.151443+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:15.151650+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:16.151897+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:17.152032+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:18.152178+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:19.152339+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:20.152494+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:21.152678+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:22.152955+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:23.153118+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:24.153265+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:25.153480+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:26.153622+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:27.153813+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:28.154013+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:29.154308+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:30.154586+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:31.154768+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:32.154936+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:33.155112+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:34.155274+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:35.155490+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:36.155664+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:37.155843+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:38.156042+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:39.156281+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:40.156551+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:41.156742+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:42.156924+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:43.157073+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:44.157223+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:45.157479+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:46.157662+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:47.157896+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:48.158091+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:49.158302+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:50.158449+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:51.158745+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:52.158977+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:53.159138+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:54.159302+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:55.159459+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:56.159696+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:57.159937+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:58.160138+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:59.160360+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:00.160525+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:01.160681+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:02.160920+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:03.161112+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:04.161377+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:05.161603+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 41574400 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:06.161776+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:07.161926+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:08.162115+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:09.162354+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:10.162490+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:11.162806+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:12.163062+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:13.163287+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:14.163503+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:15.163684+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:16.163904+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:17.164092+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:18.164293+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:19.164558+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:20.164846+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:21.165047+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:22.165256+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 41566208 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:23.165444+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:24.165625+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:25.165838+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:26.166037+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:27.166199+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:28.166392+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:29.166594+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:30.166777+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:31.166966+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:32.167160+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:33.167390+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:34.167594+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:35.167801+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:36.167949+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:37.168093+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:38.168214+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:39.168395+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:40.168607+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:41.168835+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:42.169023+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:43.169174+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:44.169350+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:45.169530+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:46.169698+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:47.169865+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 41558016 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:48.170039+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 41541632 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:49.170246+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 41484288 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'config diff' '{prefix=config diff}'
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:50.170387+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'config show' '{prefix=config show}'
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202178560 unmapped: 41361408 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:51.170533+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:22 compute-0 ceph-osd[88227]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:22 compute-0 ceph-osd[88227]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336877 data_alloc: 251658240 data_used: 29415856
Feb 02 16:02:22 compute-0 ceph-osd[88227]: prioritycache tune_memory target: 4294967296 mapped: 202244096 unmapped: 41295872 heap: 243539968 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:22 compute-0 ceph-osd[88227]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f31a6000/0x0/0x4ffc00000, data 0x43c8ef4/0x4665000, compress 0x0/0x0/0x0, omap 0x66973, meta 0x838968d), peers [0,1] op hist [])
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: tick
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_tickets
Feb 02 16:02:22 compute-0 ceph-osd[88227]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:52.170679+0000)
Feb 02 16:02:22 compute-0 ceph-osd[88227]: do_command 'log dump' '{prefix=log dump}'
Feb 02 16:02:23 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19176 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} v 0)
Feb 02 16:02:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 02 16:02:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164265541' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 02 16:02:23 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 16:02:23 compute-0 nova_compute[239545]: 2026-02-02 16:02:23.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:23 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19178 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} v 0)
Feb 02 16:02:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 02 16:02:23 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968487981' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='client.19168 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: pgmap v2149: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='client.19172 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3164265541' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:23 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3968487981' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:24 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19183 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:24 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 02 16:02:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061598163' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 02 16:02:24 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19186 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 02 16:02:24 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585885944' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: from='client.19176 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: from='client.19178 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4061598163' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 02 16:02:24 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2585885944' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 02 16:02:25 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19190 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:25 compute-0 crontab[285309]: (root) LIST (root)
Feb 02 16:02:25 compute-0 nova_compute[239545]: 2026-02-02 16:02:25.403 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:25 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 02 16:02:25 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080359498' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 02 16:02:25 compute-0 nova_compute[239545]: 2026-02-02 16:02:25.638 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:25 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19194 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:26 compute-0 ceph-mon[75334]: from='client.19183 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:26 compute-0 ceph-mon[75334]: pgmap v2150: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:26 compute-0 ceph-mon[75334]: from='client.19186 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:26 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1080359498' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 02 16:02:26 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 02 16:02:26 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740277628' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 02 16:02:26 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19198 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:26 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:26 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19202 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:26 compute-0 nova_compute[239545]: 2026-02-02 16:02:26.540 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:27 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19204 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:27 compute-0 ceph-mgr[75628]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 02 16:02:27 compute-0 ceph-e43470b2-6632-573a-87d3-0f5428ec59e9-mgr-compute-0-rxryxi[75624]: 2026-02-02T16:02:27.004+0000 7f9528241640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 02 16:02:27 compute-0 ceph-mon[75334]: from='client.19190 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:27 compute-0 ceph-mon[75334]: from='client.19194 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:27 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2740277628' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 02 16:02:27 compute-0 ceph-mon[75334]: from='client.19198 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 02 16:02:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112889341' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5630703 data_alloc: 234881024 data_used: 9601113
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 75628544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:07.112282+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 76693504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:08.112426+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 256 heartbeat osd_stat(store_statfs(0x4cdd72000/0x0/0x4ffc00000, data 0x2bc5e577/0x2bdd2000, compress 0x0/0x0/0x0, omap 0x3dffa, meta 0x6072006), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 76693504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:09.112564+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 256 heartbeat osd_stat(store_statfs(0x4cdd72000/0x0/0x4ffc00000, data 0x2bc5e577/0x2bdd2000, compress 0x0/0x0/0x0, omap 0x3dffa, meta 0x6072006), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 76693504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:10.112751+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 256 heartbeat osd_stat(store_statfs(0x4cdd72000/0x0/0x4ffc00000, data 0x2bc5e577/0x2bdd2000, compress 0x0/0x0/0x0, omap 0x3dffa, meta 0x6072006), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153427968 unmapped: 76611584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:11.112922+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5646397 data_alloc: 234881024 data_used: 9961463
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.952277184s of 13.166198730s, submitted: 249
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 256 ms_handle_reset con 0x55d6184f3c00 session 0x55d6178bea80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153427968 unmapped: 76611584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:12.113074+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153427968 unmapped: 76611584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:13.113198+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153706496 unmapped: 76333056 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d6184f3c00 session 0x55d6184eafc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:14.113344+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 76324864 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:15.113497+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153714688 unmapped: 76324864 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:16.113692+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 heartbeat osd_stat(store_statfs(0x4cdd4a000/0x0/0x4ffc00000, data 0x2bc8cc24/0x2be00000, compress 0x0/0x0/0x0, omap 0x3e3e0, meta 0x6071c20), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5722743 data_alloc: 234881024 data_used: 10042887
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 73719808 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:17.113833+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d619774000 session 0x55d61a136fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d6184f2400 session 0x55d61a136700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a08fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d61a08fc00 session 0x55d619c7bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 75005952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:18.114078+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b304000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b865800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155181056 unmapped: 74858496 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d61b865800 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:19.114225+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d6184f2400 session 0x55d619ac1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 ms_handle_reset con 0x55d6184f3c00 session 0x55d6184eb500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155353088 unmapped: 74686464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a08fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 259 ms_handle_reset con 0x55d619774000 session 0x55d6178d6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:20.114666+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 259 heartbeat osd_stat(store_statfs(0x4cd353000/0x0/0x4ffc00000, data 0x2cb4e832/0x2c7ef000, compress 0x0/0x0/0x0, omap 0x3d635, meta 0x60729cb), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 259 ms_handle_reset con 0x55d61d00e400 session 0x55d6198a0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d61d00e000 session 0x55d616e51880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d61a08fc00 session 0x55d6178d3180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 74358784 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d6184f2400 session 0x55d617fd7a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d61b304000 session 0x55d61738a380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:21.114812+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5781939 data_alloc: 234881024 data_used: 10559113
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 74358784 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d6184f3c00 session 0x55d619652700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:22.115118+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.674574852s of 10.796507835s, submitted: 190
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 ms_handle_reset con 0x55d61d00e800 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155828224 unmapped: 74211328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 261 ms_handle_reset con 0x55d619774000 session 0x55d61a1361c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:23.115385+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 261 ms_handle_reset con 0x55d61d00e800 session 0x55d6184eae00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156909568 unmapped: 73129984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 ms_handle_reset con 0x55d6184f2400 session 0x55d6178be380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 ms_handle_reset con 0x55d6184f3c00 session 0x55d61a136380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:24.115617+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 ms_handle_reset con 0x55d61d00e400 session 0x55d619bd0380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 ms_handle_reset con 0x55d6184f2400 session 0x55d616e51c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156934144 unmapped: 73105408 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 heartbeat osd_stat(store_statfs(0x4cd356000/0x0/0x4ffc00000, data 0x2cb525ac/0x2c7f4000, compress 0x0/0x0/0x0, omap 0x3d6bf, meta 0x6072941), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:25.115812+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 263 ms_handle_reset con 0x55d6184f3c00 session 0x55d616e51dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 263 ms_handle_reset con 0x55d619774000 session 0x55d61a204e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 72867840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:26.128851+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5788578 data_alloc: 234881024 data_used: 10559878
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157188096 unmapped: 72851456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 264 ms_handle_reset con 0x55d61d00e800 session 0x55d61738ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:27.129004+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a08fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b304000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d620699c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 264 ms_handle_reset con 0x55d61b304000 session 0x55d61a136700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 264 ms_handle_reset con 0x55d61a08fc00 session 0x55d61a137500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 264 ms_handle_reset con 0x55d6184f2400 session 0x55d6177ab340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 72712192 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:28.129150+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 265 ms_handle_reset con 0x55d6184f3c00 session 0x55d6184f68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b304000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 265 ms_handle_reset con 0x55d61b304000 session 0x55d619c7a1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d00e800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157343744 unmapped: 72695808 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 265 ms_handle_reset con 0x55d61d00e800 session 0x55d6183c0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 265 heartbeat osd_stat(store_statfs(0x4cd320000/0x0/0x4ffc00000, data 0x2cb815dc/0x2c82a000, compress 0x0/0x0/0x0, omap 0x3daea, meta 0x6072516), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:29.129316+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 266 ms_handle_reset con 0x55d619774000 session 0x55d6184f76c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 266 ms_handle_reset con 0x55d620699c00 session 0x55d6178d6e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f2400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157392896 unmapped: 72646656 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 266 ms_handle_reset con 0x55d6184f3c00 session 0x55d6177a8c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:30.129475+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a08fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 267 ms_handle_reset con 0x55d61a08fc00 session 0x55d619becfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 267 ms_handle_reset con 0x55d6184f2400 session 0x55d61738a8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 267 heartbeat osd_stat(store_statfs(0x4cd31a000/0x0/0x4ffc00000, data 0x2cb83212/0x2c82e000, compress 0x0/0x0/0x0, omap 0x3daea, meta 0x6072516), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157417472 unmapped: 72622080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:31.129600+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619774000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 268 ms_handle_reset con 0x55d619774000 session 0x55d6177b2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5803976 data_alloc: 234881024 data_used: 10560463
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a08fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 268 ms_handle_reset con 0x55d61a08fc00 session 0x55d619906540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 268 ms_handle_reset con 0x55d6184f3c00 session 0x55d6184f6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d620699c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 268 ms_handle_reset con 0x55d620699c00 session 0x55d617346a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157450240 unmapped: 72589312 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:32.129853+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b304000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.468647003s of 10.465318680s, submitted: 178
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d61b304000 session 0x55d6177aa540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b304000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d6177fb000 session 0x55d619c7b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157466624 unmapped: 72572928 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d61b304000 session 0x55d61a1368c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:33.129969+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 72556544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d61b304c00 session 0x55d6177a81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d61a199400 session 0x55d61838b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:34.130126+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 heartbeat osd_stat(store_statfs(0x4cd317000/0x0/0x4ffc00000, data 0x2cb8804f/0x2c831000, compress 0x0/0x0/0x0, omap 0x3d793, meta 0x607286d), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d6184f3c00 session 0x55d619d77180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 77717504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:35.130438+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 77717504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:36.130568+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5668044 data_alloc: 218103808 data_used: 3341505
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 77717504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 heartbeat osd_stat(store_statfs(0x4ce192000/0x0/0x4ffc00000, data 0x2bd1203f/0x2b9ba000, compress 0x0/0x0/0x0, omap 0x3d096, meta 0x6072f6a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:37.130794+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 77717504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:38.131020+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d6177fbc00 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d6177fb400 session 0x55d6197736c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 77717504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 ms_handle_reset con 0x55d6184f3c00 session 0x55d6184ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:39.131180+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 148594688 unmapped: 81444864 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 heartbeat osd_stat(store_statfs(0x4ce422000/0x0/0x4ffc00000, data 0x2ba8203f/0x2b72a000, compress 0x0/0x0/0x0, omap 0x3ce09, meta 0x60731f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:40.131326+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 81428480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:41.131527+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5639872 data_alloc: 218103808 data_used: 642241
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 148611072 unmapped: 81428480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:42.131685+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 272 ms_handle_reset con 0x55d6177fb000 session 0x55d61738a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a199400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 272 ms_handle_reset con 0x55d61a199400 session 0x55d61a06dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.799713135s of 10.036121368s, submitted: 162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 81420288 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:43.131994+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 71565312 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 273 ms_handle_reset con 0x55d6177fb400 session 0x55d6184f6380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:44.132238+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 274 ms_handle_reset con 0x55d61983f400 session 0x55d6178d2700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 274 ms_handle_reset con 0x55d6177fb000 session 0x55d6178d7a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149889024 unmapped: 80150528 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:45.132453+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 274 heartbeat osd_stat(store_statfs(0x4cda20000/0x0/0x4ffc00000, data 0x2c480c97/0x2c128000, compress 0x0/0x0/0x0, omap 0x3d6a4, meta 0x607295c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149889024 unmapped: 80150528 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:46.132649+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5712330 data_alloc: 218103808 data_used: 642127
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 275 ms_handle_reset con 0x55d61983ec00 session 0x55d619907340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 80216064 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:47.132824+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 80216064 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:48.133011+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 275 heartbeat osd_stat(store_statfs(0x4cda20000/0x0/0x4ffc00000, data 0x2c482851/0x2c12a000, compress 0x0/0x0/0x0, omap 0x3d931, meta 0x60726cf), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 80216064 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:49.133193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 276 ms_handle_reset con 0x55d619763c00 session 0x55d619772a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 80224256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:50.133342+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 276 ms_handle_reset con 0x55d6177fb000 session 0x55d61a1f0380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 80224256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:51.133491+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5660872 data_alloc: 218103808 data_used: 642127
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 80224256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:52.133636+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 276 ms_handle_reset con 0x55d61983ec00 session 0x55d6178d2540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d6177fb400 session 0x55d6184f7180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d61983f400 session 0x55d619ac0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 80207872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619762000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924193382s of 10.295066833s, submitted: 141
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:53.133791+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d619762000 session 0x55d6198a0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d619763000 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d6177fb000 session 0x55d6177a96c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149831680 unmapped: 80207872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:54.133963+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 heartbeat osd_stat(store_statfs(0x4ce427000/0x0/0x4ffc00000, data 0x2ba720d5/0x2b723000, compress 0x0/0x0/0x0, omap 0x3de70, meta 0x6072190), peers [0,2] op hist [0,0,0,0,1,0,0,0,1,1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 149921792 unmapped: 80117760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:55.134112+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 158629888 unmapped: 71409664 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:56.134201+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d61983ec00 session 0x55d6178d28c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619e7b800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6331488 data_alloc: 218103808 data_used: 646836
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d619e7b800 session 0x55d61a2041c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b3f8c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 150667264 unmapped: 79372288 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 heartbeat osd_stat(store_statfs(0x4c5829000/0x0/0x4ffc00000, data 0x346720d5/0x34323000, compress 0x0/0x0/0x0, omap 0x3eacf, meta 0x6071531), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:57.134297+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 ms_handle_reset con 0x55d61b3f8c00 session 0x55d6178bfc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 278 ms_handle_reset con 0x55d6177fb000 session 0x55d61a780380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 158031872 unmapped: 72007680 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:58.134456+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 158457856 unmapped: 71581696 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:59.134612+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 75K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 6125 syncs, 3.03 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 30.76 MB, 0.05 MB/s
                                           Interval WAL: 11K writes, 4716 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 70950912 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d61983f400 session 0x55d6178bf340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:00.134746+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d619763000 session 0x55d616e516c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d61983ec00 session 0x55d61a12b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fb400 session 0x55d6177aa380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fb000 session 0x55d6177b3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fbc00 session 0x55d6198a1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 75243520 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:01.134879+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fb400 session 0x55d619984c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d619763000 session 0x55d6178d2fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d61983f400 session 0x55d61a1f1880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fb000 session 0x55d6199f2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5796595 data_alloc: 218103808 data_used: 646934
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 75849728 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d6177fb400 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:02.135048+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d619763000 session 0x55d6197721c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 ms_handle_reset con 0x55d61983ec00 session 0x55d61a12afc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fbc00 session 0x55d619c181c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 heartbeat osd_stat(store_statfs(0x4ce062000/0x0/0x4ffc00000, data 0x2be36742/0x2baea000, compress 0x0/0x0/0x0, omap 0x3f76e, meta 0x6070892), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 154214400 unmapped: 75825152 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fb000 session 0x55d619d761c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fb400 session 0x55d61738b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fbc00 session 0x55d6198a1880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:03.135168+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.524050713s of 10.017764091s, submitted: 317
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d619763000 session 0x55d61738afc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d61983ec00 session 0x55d6178d2380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fb000 session 0x55d619984e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fb400 session 0x55d61a204e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d619763000 session 0x55d619ac0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d61983ec00 session 0x55d617f5d880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619e7b800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d619e7b800 session 0x55d6184ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fbc00 session 0x55d619ac0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d616f47c00 session 0x55d615c75340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619e7b800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153018368 unmapped: 77021184 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:04.135318+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d6177fb400 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc ms_handle_reset ms_handle_reset con 0x55d61776b000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/273425939
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/273425939,v1:192.168.122.100:6801/273425939]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: get_auth_request con 0x55d619762000 auth_method 0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc handle_mgr_configure stats_period=5
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153239552 unmapped: 76800000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:05.135463+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d61758b800 session 0x55d6183c1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 heartbeat osd_stat(store_statfs(0x4cd5e1000/0x0/0x4ffc00000, data 0x2c8b42f0/0x2c56a000, compress 0x0/0x0/0x0, omap 0x3f4fc, meta 0x6070b04), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 ms_handle_reset con 0x55d617fc4800 session 0x55d61a5d5180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183d2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 281 ms_handle_reset con 0x55d6177fb000 session 0x55d6198a1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153239552 unmapped: 76800000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:06.135580+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d617fc4800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5843429 data_alloc: 218103808 data_used: 647421
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d619763000 session 0x55d6178bee00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 153395200 unmapped: 76644352 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:07.135742+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e9400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d6174e9400 session 0x55d6184ebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d619a8f800 session 0x55d61779fdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d617fc4800 session 0x55d619d77180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d61776a000 session 0x55d619907340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d6177fb000 session 0x55d619d77340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 78299136 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:08.135876+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d6177fb400 session 0x55d6184f6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d6177fb000 session 0x55d619ac1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 ms_handle_reset con 0x55d6177fb400 session 0x55d619c181c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d617fc4800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 283 ms_handle_reset con 0x55d617fc4800 session 0x55d6199f2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 151560192 unmapped: 78479360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:09.135988+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 283 heartbeat osd_stat(store_statfs(0x4ce8c6000/0x0/0x4ffc00000, data 0x2b0ee710/0x2b284000, compress 0x0/0x0/0x0, omap 0x3ffbe, meta 0x6070042), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 284 ms_handle_reset con 0x55d61776a000 session 0x55d6178befc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 284 ms_handle_reset con 0x55d619760c00 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 154624000 unmapped: 75415552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 284 ms_handle_reset con 0x55d61776a000 session 0x55d61779fdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:10.136107+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 284 ms_handle_reset con 0x55d619763000 session 0x55d619c7b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 74260480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:11.136215+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4020610 data_alloc: 234881024 data_used: 9472781
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 74260480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:12.136347+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 ms_handle_reset con 0x55d6177fb000 session 0x55d61a12afc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 ms_handle_reset con 0x55d6177fb400 session 0x55d6177a8380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:13.136470+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f54c0000/0x0/0x4ffc00000, data 0x1cf1eb4/0x1e88000, compress 0x0/0x0/0x0, omap 0x4057b, meta 0x606fa85), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:14.136649+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:15.137000+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:16.137129+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2440235 data_alloc: 234881024 data_used: 9468685
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f54c0000/0x0/0x4ffc00000, data 0x1cf1eb4/0x1e88000, compress 0x0/0x0/0x0, omap 0x4057b, meta 0x606fa85), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:17.137358+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f54c0000/0x0/0x4ffc00000, data 0x1cf1eb4/0x1e88000, compress 0x0/0x0/0x0, omap 0x4057b, meta 0x606fa85), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 73146368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:18.137505+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.255355835s of 15.675593376s, submitted: 339
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:19.930397+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 156917760 unmapped: 73121792 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d617fc4800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 286 ms_handle_reset con 0x55d617fc4800 session 0x55d6177ab340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:20.930512+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 68214784 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 ms_handle_reset con 0x55d61776a000 session 0x55d61a204e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2501183 data_alloc: 234881024 data_used: 9935433
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:21.930627+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163127296 unmapped: 66912256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f7569000/0x0/0x4ffc00000, data 0x24384ff/0x25d2000, compress 0x0/0x0/0x0, omap 0x409a0, meta 0x606f660), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:22.930965+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163160064 unmapped: 66879488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f7569000/0x0/0x4ffc00000, data 0x24384ff/0x25d2000, compress 0x0/0x0/0x0, omap 0x409a0, meta 0x606f660), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 ms_handle_reset con 0x55d6177fb400 session 0x55d617fd6a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 ms_handle_reset con 0x55d6177fb000 session 0x55d6198a0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f7569000/0x0/0x4ffc00000, data 0x24384ff/0x25d2000, compress 0x0/0x0/0x0, omap 0x409a0, meta 0x606f660), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:23.931101+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163160064 unmapped: 66879488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d617fc4800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:24.931260+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163160064 unmapped: 66879488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 288 ms_handle_reset con 0x55d617fc4800 session 0x55d619c7b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:25.931418+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163160064 unmapped: 66879488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 288 ms_handle_reset con 0x55d619760c00 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 288 ms_handle_reset con 0x55d619763000 session 0x55d6184ebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2500267 data_alloc: 234881024 data_used: 9936018
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:26.931617+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163299328 unmapped: 66740224 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:27.931798+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163299328 unmapped: 66740224 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795214653s of 10.060761452s, submitted: 112
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:28.931996+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163299328 unmapped: 66740224 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d61776a000 session 0x55d6177a9340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f7550000/0x0/0x4ffc00000, data 0x245dc57/0x25fc000, compress 0x0/0x0/0x0, omap 0x4f79f, meta 0x6060861), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:29.932128+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163332096 unmapped: 66707456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d6177fb000 session 0x55d619d77180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d6177fb400 session 0x55d617f5d500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d617fc4800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d617fc4800 session 0x55d617346fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:30.932253+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163332096 unmapped: 66707456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d61776a000 session 0x55d619906000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d6177fb000 session 0x55d617fd61c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2506446 data_alloc: 234881024 data_used: 9936635
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:31.932531+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163332096 unmapped: 66707456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d619763000 session 0x55d6184f7880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d619760400 session 0x55d619907340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:32.932688+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163471360 unmapped: 66568192 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b307c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d61b307c00 session 0x55d617fd7500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 ms_handle_reset con 0x55d6177fb000 session 0x55d6198a1500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:33.932755+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 66420736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619760400 session 0x55d616e50fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:34.932880+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 heartbeat osd_stat(store_statfs(0x4f754a000/0x0/0x4ffc00000, data 0x2463c57/0x2602000, compress 0x0/0x0/0x0, omap 0x4f937, meta 0x60606c9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 66420736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d61776a000 session 0x55d61738a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d6177fb400 session 0x55d61738ae00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:35.933017+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 66420736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619763000 session 0x55d61779f6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d6177fb000 session 0x55d6178bee00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619760400 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b306000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d61b306000 session 0x55d6178bfc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619776400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619776400 session 0x55d6177a96c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619fde400 session 0x55d616e50fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619a8f800 session 0x55d6178be540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d6177fbc00 session 0x55d6184ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529099 data_alloc: 234881024 data_used: 10193226
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 290 ms_handle_reset con 0x55d619fde400 session 0x55d61738ae00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fb000 session 0x55d6198a0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:36.933645+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d619760400 session 0x55d619984540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 164290560 unmapped: 65748992 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fb000 session 0x55d6178d2380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fb400 session 0x55d6177b2fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fbc00 session 0x55d617fe2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d61776a000 session 0x55d6178be700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d619a8f800 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:37.933771+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 158154752 unmapped: 71884800 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d619a8f800 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d61776a000 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:38.934026+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157384704 unmapped: 72654848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:39.934155+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.611393929s of 11.116248131s, submitted: 146
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fb000 session 0x55d61779f6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157384704 unmapped: 72654848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0x18b03f7/0x1a4d000, compress 0x0/0x0/0x0, omap 0x4fa52, meta 0x60605ae), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:40.934276+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157384704 unmapped: 72654848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 ms_handle_reset con 0x55d6177fb400 session 0x55d6198a0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402255 data_alloc: 218103808 data_used: 331660
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:41.934422+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 292 ms_handle_reset con 0x55d619fde400 session 0x55d6178d2fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 292 ms_handle_reset con 0x55d6177fbc00 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157401088 unmapped: 72638464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 292 ms_handle_reset con 0x55d619fde400 session 0x55d6177ab180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 292 ms_handle_reset con 0x55d61776a000 session 0x55d619ac0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 ms_handle_reset con 0x55d6177fb000 session 0x55d61738b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:42.934577+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 ms_handle_reset con 0x55d6177fb400 session 0x55d61a2041c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 72630272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 ms_handle_reset con 0x55d6177fb400 session 0x55d6178d2540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 ms_handle_reset con 0x55d61776a000 session 0x55d6177b3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 ms_handle_reset con 0x55d61868fc00 session 0x55d61779ee00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:43.934824+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 72630272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:44.934972+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619fde400 session 0x55d61a1f1880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159457280 unmapped: 70582272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f8134000/0x0/0x4ffc00000, data 0x187561c/0x1a16000, compress 0x0/0x0/0x0, omap 0x5037b, meta 0x605fc85), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:45.935068+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619776400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619776400 session 0x55d6177b3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619a8f800 session 0x55d6198a1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159465472 unmapped: 70574080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f8134000/0x0/0x4ffc00000, data 0x187561c/0x1a16000, compress 0x0/0x0/0x0, omap 0x50403, meta 0x605fbfd), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2444395 data_alloc: 218103808 data_used: 6000444
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:46.936378+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619a8f800 session 0x55d61a137500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159465472 unmapped: 70574080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d61776a000 session 0x55d61a136380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:47.936570+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 70565888 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d6177fb400 session 0x55d619bed500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619776400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619776400 session 0x55d6178be700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:48.936700+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 70565888 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d619fde400 session 0x55d6178be380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 ms_handle_reset con 0x55d6177fb400 session 0x55d617fd6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6196df400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:49.936993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159399936 unmapped: 70639616 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.432394981s of 10.685340881s, submitted: 74
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 ms_handle_reset con 0x55d6196df400 session 0x55d6177b2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 ms_handle_reset con 0x55d61776a000 session 0x55d61738a8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:50.937197+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159432704 unmapped: 70606848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2445961 data_alloc: 218103808 data_used: 6000931
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:51.937369+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x18771aa/0x1a18000, compress 0x0/0x0/0x0, omap 0x50c8c, meta 0x605f374), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 ms_handle_reset con 0x55d6174e6000 session 0x55d6184ea700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159588352 unmapped: 70451200 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 ms_handle_reset con 0x55d61983d000 session 0x55d619ac1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 ms_handle_reset con 0x55d6174e6000 session 0x55d61a136380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 ms_handle_reset con 0x55d6174e6400 session 0x55d6178d6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:52.937507+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159744000 unmapped: 70295552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 ms_handle_reset con 0x55d61776a000 session 0x55d61738a540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 ms_handle_reset con 0x55d6177fb400 session 0x55d6178d2a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:53.937622+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 159801344 unmapped: 70238208 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x1878d38/0x1a1a000, compress 0x0/0x0/0x0, omap 0x50d9c, meta 0x605f264), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:54.937811+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6196df400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 ms_handle_reset con 0x55d6196df400 session 0x55d619984e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166756352 unmapped: 63283200 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:55.938015+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f73e2000/0x0/0x4ffc00000, data 0x25c2d38/0x2764000, compress 0x0/0x0/0x0, omap 0x51088, meta 0x605ef78), peers [0,2] op hist [1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166461440 unmapped: 63578112 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 297 ms_handle_reset con 0x55d6174e6000 session 0x55d61a12ac40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2545910 data_alloc: 218103808 data_used: 6926529
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:56.938131+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 165740544 unmapped: 64299008 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:57.938342+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d61776a000 session 0x55d617fd7180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d6174e6400 session 0x55d6183c1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 165756928 unmapped: 64282624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d6177fb400 session 0x55d6178d7340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:58.938472+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e5000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d6172e5000 session 0x55d6184ea380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 64258048 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:59.938623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 64258048 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d6174e6000 session 0x55d619c7bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 ms_handle_reset con 0x55d61776a000 session 0x55d619ac1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.340295792s of 10.090871811s, submitted: 366
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 ms_handle_reset con 0x55d6177fb400 session 0x55d6178bfa40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:00.938756+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 ms_handle_reset con 0x55d6174e6400 session 0x55d6177b28c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166862848 unmapped: 63176704 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f7324000/0x0/0x4ffc00000, data 0x2682433/0x2828000, compress 0x0/0x0/0x0, omap 0x51bea, meta 0x605e416), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a622800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 ms_handle_reset con 0x55d61a622800 session 0x55d619bed6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f7320000/0x0/0x4ffc00000, data 0x2683fc1/0x282a000, compress 0x0/0x0/0x0, omap 0x51c51, meta 0x605e3af), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2557949 data_alloc: 218103808 data_used: 6926627
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:01.938907+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 63152128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:02.939095+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 63152128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d6174e6000 session 0x55d6178be1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:03.939227+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166912000 unmapped: 63127552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d6174e6400 session 0x55d6184f7a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d61776a000 session 0x55d6198a0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:04.939428+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166920192 unmapped: 63119360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f72f9000/0x0/0x4ffc00000, data 0x26a7beb/0x2851000, compress 0x0/0x0/0x0, omap 0x52970, meta 0x605d690), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f72fa000/0x0/0x4ffc00000, data 0x26a7bdb/0x2850000, compress 0x0/0x0/0x0, omap 0x52a84, meta 0x605d57c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d6177fb400 session 0x55d617fd7c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc40400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:05.939609+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d61bc40400 session 0x55d6184f6a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166920192 unmapped: 63119360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2558829 data_alloc: 218103808 data_used: 6926725
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:06.939773+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166920192 unmapped: 63119360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d6174e6000 session 0x55d61a137500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 ms_handle_reset con 0x55d6174e6400 session 0x55d61a7c4700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:07.939885+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 63029248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:08.940050+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 63029248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:09.940183+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 63029248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f72ed000/0x0/0x4ffc00000, data 0x26b45f8/0x285d000, compress 0x0/0x0/0x0, omap 0x52d36, meta 0x605d2ca), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:10.940352+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f72ed000/0x0/0x4ffc00000, data 0x26b45f8/0x285d000, compress 0x0/0x0/0x0, omap 0x52d36, meta 0x605d2ca), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 63029248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2562155 data_alloc: 218103808 data_used: 6926627
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:11.940546+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 63029248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.885825157s of 12.130884171s, submitted: 88
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:12.940654+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 ms_handle_reset con 0x55d61776a000 session 0x55d61a12a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 63815680 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f72ed000/0x0/0x4ffc00000, data 0x26b466a/0x285f000, compress 0x0/0x0/0x0, omap 0x52fdc, meta 0x605d024), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:13.940795+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 63815680 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 ms_handle_reset con 0x55d6177fb400 session 0x55d615c75340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:14.941015+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166232064 unmapped: 63807488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 ms_handle_reset con 0x55d61bc41000 session 0x55d6178bf500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:15.941145+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166240256 unmapped: 63799296 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f72e3000/0x0/0x4ffc00000, data 0x26b9278/0x2867000, compress 0x0/0x0/0x0, omap 0x5372f, meta 0x605c8d1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 ms_handle_reset con 0x55d61bc41000 session 0x55d6184f7180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2574803 data_alloc: 218103808 data_used: 6926627
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:16.941282+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166240256 unmapped: 63799296 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:17.941518+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 ms_handle_reset con 0x55d6174e6400 session 0x55d61a5d4540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 ms_handle_reset con 0x55d61776a000 session 0x55d61a12b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166969344 unmapped: 63070208 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f6efc000/0x0/0x4ffc00000, data 0x2aa12da/0x2c50000, compress 0x0/0x0/0x0, omap 0x5372f, meta 0x605c8d1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:18.941676+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167002112 unmapped: 63037440 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 ms_handle_reset con 0x55d6177fb400 session 0x55d61a204a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f6efc000/0x0/0x4ffc00000, data 0x2aa12da/0x2c50000, compress 0x0/0x0/0x0, omap 0x5372f, meta 0x605c8d1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:19.941853+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167043072 unmapped: 62996480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:20.942042+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167043072 unmapped: 62996480 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f6efb000/0x0/0x4ffc00000, data 0x2aa133c/0x2c51000, compress 0x0/0x0/0x0, omap 0x537b9, meta 0x605c847), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d61bc41400 session 0x55d619907a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2607103 data_alloc: 218103808 data_used: 6926725
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:21.942186+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 62988288 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d6174e6400 session 0x55d6178d3180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d61776a000 session 0x55d61a136540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:22.942335+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167067648 unmapped: 62971904 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d6177fb400 session 0x55d6177a8fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.516494751s of 11.022258759s, submitted: 69
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:23.942440+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167067648 unmapped: 62971904 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b3cf400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d61b3cf400 session 0x55d6177aa540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 ms_handle_reset con 0x55d61bc41c00 session 0x55d61a205180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d61bc41000 session 0x55d6184f6e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:24.942626+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167510016 unmapped: 62529536 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d6177fb000 session 0x55d619bd1500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:25.942794+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 167362560 unmapped: 62676992 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d6177fb400 session 0x55d619bed340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b3cf400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d61776a000 session 0x55d619984000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f82f8000/0x0/0x4ffc00000, data 0x16a2af8/0x1854000, compress 0x0/0x0/0x0, omap 0x5309f, meta 0x605cf61), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d61b3cf400 session 0x55d6198a0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2476182 data_alloc: 218103808 data_used: 3971347
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:26.942973+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163225600 unmapped: 66813952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 ms_handle_reset con 0x55d61776a000 session 0x55d6177a9a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 ms_handle_reset con 0x55d6177fb000 session 0x55d615c75340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:27.943101+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163250176 unmapped: 66789376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:28.943223+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163250176 unmapped: 66789376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 ms_handle_reset con 0x55d6177fb400 session 0x55d61a58f500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 ms_handle_reset con 0x55d61bc41000 session 0x55d6178d2a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:29.943331+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f82f8000/0x0/0x4ffc00000, data 0x16a45d4/0x1854000, compress 0x0/0x0/0x0, omap 0x53648, meta 0x605c9b8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:30.943479+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a059800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 ms_handle_reset con 0x55d61a059800 session 0x55d6178d2540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:31.943644+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2477128 data_alloc: 218103808 data_used: 3971347
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 ms_handle_reset con 0x55d61776a000 session 0x55d61a205500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:32.943784+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:33.943894+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887814522s of 10.178145409s, submitted: 109
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f82f3000/0x0/0x4ffc00000, data 0x16a6053/0x1857000, compress 0x0/0x0/0x0, omap 0x53717, meta 0x605c8e9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 306 ms_handle_reset con 0x55d6177fb000 session 0x55d61738b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:34.944030+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 163233792 unmapped: 66805760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:35.944240+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166453248 unmapped: 63586304 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:36.944484+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2552528 data_alloc: 218103808 data_used: 4954387
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166510592 unmapped: 63528960 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 306 ms_handle_reset con 0x55d61bc41000 session 0x55d61a1376c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619777400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 307 ms_handle_reset con 0x55d61d2fd000 session 0x55d617fd76c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:37.944624+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 63242240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 308 ms_handle_reset con 0x55d619777400 session 0x55d616e50540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 308 ms_handle_reset con 0x55d6177fb400 session 0x55d6198a1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:38.944760+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 63242240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:39.944935+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f789d000/0x0/0x4ffc00000, data 0x20f67ed/0x22ab000, compress 0x0/0x0/0x0, omap 0x540b9, meta 0x605bf47), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 63242240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 309 handle_osd_map epochs [309,309], i have 309, src has [1,309]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:40.945050+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 309 ms_handle_reset con 0x55d61776a000 session 0x55d6178be540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 63225856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:41.945154+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2577800 data_alloc: 218103808 data_used: 4994835
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 63225856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 310 ms_handle_reset con 0x55d6177fb000 session 0x55d6198a0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 310 ms_handle_reset con 0x55d61bc41000 session 0x55d6199f21c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:42.945254+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:43.945360+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:44.945527+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f7879000/0x0/0x4ffc00000, data 0x211af87/0x22d1000, compress 0x0/0x0/0x0, omap 0x547c7, meta 0x605b839), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:45.945807+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:46.945973+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2579100 data_alloc: 218103808 data_used: 5003640
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:47.946187+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.871769905s of 14.502084732s, submitted: 226
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:48.946324+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:49.946464+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 ms_handle_reset con 0x55d61c27e000 session 0x55d61a136a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:50.946621+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f786e000/0x0/0x4ffc00000, data 0x2123aa0/0x22dc000, compress 0x0/0x0/0x0, omap 0x54dfd, meta 0x605b203), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 166838272 unmapped: 63201280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 ms_handle_reset con 0x55d61776a000 session 0x55d61a7c4700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f786e000/0x0/0x4ffc00000, data 0x2123aa0/0x22dc000, compress 0x0/0x0/0x0, omap 0x54dfd, meta 0x605b203), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:51.946812+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2597522 data_alloc: 218103808 data_used: 5032328
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 ms_handle_reset con 0x55d6177fb400 session 0x55d61a7c5180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 168189952 unmapped: 61849600 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a056c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 312 ms_handle_reset con 0x55d61bc41000 session 0x55d619bd1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:52.946964+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 312 ms_handle_reset con 0x55d61d2fd000 session 0x55d619c7a380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 168189952 unmapped: 61849600 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 312 ms_handle_reset con 0x55d6177fb800 session 0x55d61838aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 313 ms_handle_reset con 0x55d61a056c00 session 0x55d619773dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 313 ms_handle_reset con 0x55d6177fb000 session 0x55d617fd6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:53.947087+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 168157184 unmapped: 61882368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 313 ms_handle_reset con 0x55d6177fb400 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 314 ms_handle_reset con 0x55d61bc41000 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 314 ms_handle_reset con 0x55d61776a000 session 0x55d61a06c380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:54.947256+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61874176 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 315 ms_handle_reset con 0x55d6177fb000 session 0x55d619f47500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a056c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 315 ms_handle_reset con 0x55d61a056c00 session 0x55d6178d3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:55.947357+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 315 ms_handle_reset con 0x55d6177fb400 session 0x55d619f46c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 61865984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a053000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d61d2fd000 session 0x55d619f46700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:56.947482+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d61776a000 session 0x55d61a2041c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d61bc41000 session 0x55d619beaa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d61a053000 session 0x55d61a566540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2620186 data_alloc: 218103808 data_used: 5007948
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 48029696 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d6177fb400 session 0x55d619beac40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 ms_handle_reset con 0x55d6177fb000 session 0x55d61838a8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a056c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f6982000/0x0/0x4ffc00000, data 0x3005d87/0x31ca000, compress 0x0/0x0/0x0, omap 0x554f0, meta 0x605ab10), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 317 ms_handle_reset con 0x55d61a056c00 session 0x55d61838b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:57.947618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 48029696 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 317 ms_handle_reset con 0x55d6177fb400 session 0x55d619ac0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a053000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.680179596s of 10.111716270s, submitted: 146
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d6177fb000 session 0x55d6184f61c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:58.947800+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d61a053000 session 0x55d61838ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d61bc41000 session 0x55d619bea8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 183115776 unmapped: 46923776 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d61d2fd000 session 0x55d619beb880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d6177fb000 session 0x55d61838ae00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d6177fb400 session 0x55d619f47dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:59.947912+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f6974000/0x0/0x4ffc00000, data 0x300c3aa/0x31d4000, compress 0x0/0x0/0x0, omap 0x55277, meta 0x605ad89), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 183115776 unmapped: 46923776 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a053000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d61c27f400 session 0x55d619bea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 ms_handle_reset con 0x55d61bc41000 session 0x55d619aa8540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:00.948049+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a622400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fc400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d61a622400 session 0x55d6178d2fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d6177f8800 session 0x55d619653340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d61bd56000 session 0x55d61a566c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178077696 unmapped: 51961856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d6177fb000 session 0x55d619bec8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f696f000/0x0/0x4ffc00000, data 0x300e010/0x31db000, compress 0x0/0x0/0x0, omap 0x55a9a, meta 0x605a566), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d6177fb400 session 0x55d619bec000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 ms_handle_reset con 0x55d61d2fc400 session 0x55d619f47880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:01.948208+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 320 ms_handle_reset con 0x55d61a053000 session 0x55d6178bea80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2753160 data_alloc: 234881024 data_used: 15321170
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177979392 unmapped: 52060160 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:02.948388+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 321 ms_handle_reset con 0x55d6177fb000 session 0x55d619653dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 321 ms_handle_reset con 0x55d6177fb400 session 0x55d619652fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178053120 unmapped: 51986432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61bd56000 session 0x55d619f47340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:03.948496+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61bc41000 session 0x55d619653340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177f8800 session 0x55d619f47a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178102272 unmapped: 51937280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb000 session 0x55d619becfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb400 session 0x55d617346e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a053000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61a053000 session 0x55d619bd0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61bd56000 session 0x55d619653500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61bc41000 session 0x55d619bd0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177f8800 session 0x55d619bd1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb000 session 0x55d61a7c5a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:04.948641+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb400 session 0x55d619773c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178110464 unmapped: 51929088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a053000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61a053000 session 0x55d6196528c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:05.948776+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178110464 unmapped: 51929088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:06.948922+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2755601 data_alloc: 234881024 data_used: 15321253
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178110464 unmapped: 51929088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f696b000/0x0/0x4ffc00000, data 0x3013377/0x31df000, compress 0x0/0x0/0x0, omap 0x55e9e, meta 0x605a162), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb000 session 0x55d6196521c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d61bc41000 session 0x55d6173461c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 ms_handle_reset con 0x55d6177fb400 session 0x55d619652540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:07.949041+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a054800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 323 ms_handle_reset con 0x55d61a054800 session 0x55d619aa81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178118656 unmapped: 51920896 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b307000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 323 ms_handle_reset con 0x55d61b307000 session 0x55d619f46a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f6967000/0x0/0x4ffc00000, data 0x3014fe5/0x31e3000, compress 0x0/0x0/0x0, omap 0x5648c, meta 0x6059b74), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.859339714s of 10.105662346s, submitted: 143
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d61c27f400 session 0x55d61838a1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d6177fb000 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:08.949167+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d6177f8800 session 0x55d619aa8540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d6177fb400 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a054800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d61a054800 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 51888128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d6177f8800 session 0x55d61a7c4540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 ms_handle_reset con 0x55d6177fb400 session 0x55d619906fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d6177fb000 session 0x55d61838bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:09.949265+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d61c27f400 session 0x55d6198a0e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178184192 unmapped: 51855360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:10.949361+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f695e000/0x0/0x4ffc00000, data 0x3018d2b/0x31ec000, compress 0x0/0x0/0x0, omap 0x569a5, meta 0x605965b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 51167232 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d61c27c400 session 0x55d619bebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:11.949487+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d6177f8800 session 0x55d61838ac40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2798972 data_alloc: 234881024 data_used: 19552437
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178880512 unmapped: 51159040 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d6177fb000 session 0x55d6177a8c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 ms_handle_reset con 0x55d6177fb400 session 0x55d617f5d6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 326 ms_handle_reset con 0x55d61c27c400 session 0x55d619906380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:12.949637+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f626d000/0x0/0x4ffc00000, data 0x370bd2a/0x38df000, compress 0x0/0x0/0x0, omap 0x56429, meta 0x6059bd7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179388416 unmapped: 50651136 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:13.967309+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618059800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 327 ms_handle_reset con 0x55d61bd56800 session 0x55d61a566540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 327 ms_handle_reset con 0x55d618059800 session 0x55d619bea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179519488 unmapped: 50520064 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 328 ms_handle_reset con 0x55d61c27f400 session 0x55d619f46e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:14.967474+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 328 ms_handle_reset con 0x55d6177f8800 session 0x55d6173468c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179519488 unmapped: 50520064 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 328 ms_handle_reset con 0x55d6177fb000 session 0x55d617346700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:15.967648+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179544064 unmapped: 50495488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 328 ms_handle_reset con 0x55d61c27c400 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 329 ms_handle_reset con 0x55d61c27c400 session 0x55d619beb880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 329 ms_handle_reset con 0x55d6177fb400 session 0x55d617347340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:16.967796+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 329 ms_handle_reset con 0x55d6177f8800 session 0x55d619bec8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813938 data_alloc: 234881024 data_used: 19552421
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 50462720 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f6954000/0x0/0x4ffc00000, data 0x301faec/0x31f6000, compress 0x0/0x0/0x0, omap 0x56393, meta 0x6059c6d), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:17.967908+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 330 ms_handle_reset con 0x55d6177fb000 session 0x55d619aa9500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 50446336 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:18.968046+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f694e000/0x0/0x4ffc00000, data 0x30215e9/0x31fa000, compress 0x0/0x0/0x0, omap 0x56aa1, meta 0x605955f), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 50446336 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618059800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.412773132s of 10.826214790s, submitted: 154
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 330 ms_handle_reset con 0x55d61c27f400 session 0x55d619773dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:19.968183+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 330 ms_handle_reset con 0x55d61c27f400 session 0x55d617347a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 50446336 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 ms_handle_reset con 0x55d6177f8800 session 0x55d6178d3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 ms_handle_reset con 0x55d6177fb000 session 0x55d61a06c8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 ms_handle_reset con 0x55d618059800 session 0x55d617fd6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:20.968347+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180936704 unmapped: 49102848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 ms_handle_reset con 0x55d61c27c400 session 0x55d617f5ddc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:21.968472+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845754 data_alloc: 234881024 data_used: 21149975
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 181501952 unmapped: 48537600 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f6783000/0x0/0x4ffc00000, data 0x31ef1a1/0x33c9000, compress 0x0/0x0/0x0, omap 0x56bfa, meta 0x6059406), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 332 ms_handle_reset con 0x55d6177fb400 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 332 ms_handle_reset con 0x55d61c27c400 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:22.968752+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 181657600 unmapped: 48381952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:23.968878+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 181657600 unmapped: 48381952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 334 heartbeat osd_stat(store_statfs(0x4f6778000/0x0/0x4ffc00000, data 0x31f43e6/0x33d2000, compress 0x0/0x0/0x0, omap 0x57461, meta 0x6058b9f), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:24.970488+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 334 heartbeat osd_stat(store_statfs(0x4f6778000/0x0/0x4ffc00000, data 0x31f43e6/0x33d2000, compress 0x0/0x0/0x0, omap 0x57461, meta 0x6058b9f), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 48275456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177fb000 session 0x55d617fe21c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618059800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d618059800 session 0x55d61a1368c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d61c27f400 session 0x55d619bec1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177f8800 session 0x55d619bed880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d61c27f400 session 0x55d619f46fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177fb000 session 0x55d619bd08c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:25.970602+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177fb400 session 0x55d619beba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618059800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d618059800 session 0x55d619bd0e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177f8800 session 0x55d619becc40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177fb000 session 0x55d619653500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177fb400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 ms_handle_reset con 0x55d6177fb400 session 0x55d61779f6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 182280192 unmapped: 47759360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 336 ms_handle_reset con 0x55d61c27f400 session 0x55d619bebc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:26.970725+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2936284 data_alloc: 234881024 data_used: 21153859
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 182280192 unmapped: 47759360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:27.970822+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 336 ms_handle_reset con 0x55d619a8cc00 session 0x55d6178bea80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 182280192 unmapped: 47759360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:28.970947+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 183115776 unmapped: 46923776 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.637457848s of 10.082945824s, submitted: 208
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 ms_handle_reset con 0x55d61c27c400 session 0x55d61a7c5a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 ms_handle_reset con 0x55d6174e8800 session 0x55d619653dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:29.971063+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195805184 unmapped: 34234368 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:30.971190+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f5b5b000/0x0/0x4ffc00000, data 0x3e0de5c/0x3fed000, compress 0x0/0x0/0x0, omap 0x57b21, meta 0x60584df), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 34185216 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:31.971328+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 ms_handle_reset con 0x55d619a8cc00 session 0x55d619aa8380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 ms_handle_reset con 0x55d6177f8800 session 0x55d617fe2540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3015193 data_alloc: 251658240 data_used: 33638587
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 34160640 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 ms_handle_reset con 0x55d619fdd800 session 0x55d61a06d880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:32.971474+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188760064 unmapped: 41279488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:33.971624+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f676e000/0x0/0x4ffc00000, data 0x31fc94f/0x33dc000, compress 0x0/0x0/0x0, omap 0x56f72, meta 0x605908e), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188760064 unmapped: 41279488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:34.971779+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 339 ms_handle_reset con 0x55d6177f8800 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188760064 unmapped: 41279488 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f676e000/0x0/0x4ffc00000, data 0x31fc9c1/0x33de000, compress 0x0/0x0/0x0, omap 0x5a620, meta 0x60559e0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 340 ms_handle_reset con 0x55d6174e8800 session 0x55d617346540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:35.971954+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 189734912 unmapped: 40304640 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:36.972092+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2887003 data_alloc: 234881024 data_used: 22887595
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 189743104 unmapped: 40296448 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:37.972195+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 341 ms_handle_reset con 0x55d61983fc00 session 0x55d61838ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 341 ms_handle_reset con 0x55d61c27c400 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188776448 unmapped: 41263104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:38.972323+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 342 ms_handle_reset con 0x55d61983d800 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 342 ms_handle_reset con 0x55d619a8cc00 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188850176 unmapped: 41189376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:39.972447+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188850176 unmapped: 41189376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f675f000/0x0/0x4ffc00000, data 0x32024a8/0x33e9000, compress 0x0/0x0/0x0, omap 0x59f52, meta 0x60560ae), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.705589294s of 10.977404594s, submitted: 166
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:40.972566+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 343 ms_handle_reset con 0x55d61983d800 session 0x55d619beb180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 41132032 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:41.972771+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896168 data_alloc: 234881024 data_used: 22887693
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:42.972922+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 ms_handle_reset con 0x55d6174e8800 session 0x55d61a7c5180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 ms_handle_reset con 0x55d6177f8800 session 0x55d619f46e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 ms_handle_reset con 0x55d61983fc00 session 0x55d619bec000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 ms_handle_reset con 0x55d6174e8800 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:43.973040+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:44.973182+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f6760000/0x0/0x4ffc00000, data 0x3205972/0x33ea000, compress 0x0/0x0/0x0, omap 0x5a42f, meta 0x6055bd1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:45.973295+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:46.973492+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2897096 data_alloc: 234881024 data_used: 22888577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:47.973670+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:48.973752+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41123840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 345 ms_handle_reset con 0x55d6177f8800 session 0x55d6178d7500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 345 ms_handle_reset con 0x55d61983d800 session 0x55d619652e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:49.973866+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188923904 unmapped: 41115648 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.964217186s of 10.096281052s, submitted: 83
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:50.973999+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 ms_handle_reset con 0x55d61c27c400 session 0x55d6184eb500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 38780928 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cde8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 ms_handle_reset con 0x55d61d2fd400 session 0x55d619bed180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f675a000/0x0/0x4ffc00000, data 0x32090af/0x33f2000, compress 0x0/0x0/0x0, omap 0x5af48, meta 0x60550b8), peers [0,2] op hist [0,1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:51.974138+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 ms_handle_reset con 0x55d6174e8800 session 0x55d619653a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a7c5340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2917778 data_alloc: 234881024 data_used: 25850513
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 347 ms_handle_reset con 0x55d6177f8800 session 0x55d617347180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 38764544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 347 ms_handle_reset con 0x55d61983d800 session 0x55d61a58fdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d61c27c400 session 0x55d61a06ddc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d61cde8800 session 0x55d617fe3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:52.974264+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d6174e8800 session 0x55d61838aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d6177f8800 session 0x55d619aa81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 39362560 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:53.974388+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d61bc41000 session 0x55d61a12bdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d61983d800 session 0x55d6196536c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d61983d800 session 0x55d619652fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 39337984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:54.974530+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 39337984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:55.974653+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f691c000/0x0/0x4ffc00000, data 0x304084f/0x322c000, compress 0x0/0x0/0x0, omap 0x5b42b, meta 0x6054bd5), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 39337984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 ms_handle_reset con 0x55d6177f8800 session 0x55d619beb880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 349 ms_handle_reset con 0x55d61bc41000 session 0x55d61738b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cde8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:56.974802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 349 ms_handle_reset con 0x55d619a8cc00 session 0x55d61838bdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 349 ms_handle_reset con 0x55d61a058c00 session 0x55d617fe2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915777 data_alloc: 234881024 data_used: 25773258
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190906368 unmapped: 39133184 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 350 ms_handle_reset con 0x55d61cde8800 session 0x55d61838bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 351 ms_handle_reset con 0x55d6174e8800 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:57.974957+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 39124992 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 ms_handle_reset con 0x55d6177f8800 session 0x55d6178d68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 ms_handle_reset con 0x55d61983d800 session 0x55d61a566c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:58.975117+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 ms_handle_reset con 0x55d61a058c00 session 0x55d619772700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 ms_handle_reset con 0x55d619a8cc00 session 0x55d619ac0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 ms_handle_reset con 0x55d61a058c00 session 0x55d619652c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 187088896 unmapped: 42950656 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 ms_handle_reset con 0x55d6174e8800 session 0x55d61a06cc40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:59.975292+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 ms_handle_reset con 0x55d6177f8800 session 0x55d61a2041c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 187088896 unmapped: 42950656 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847323418s of 10.172392845s, submitted: 192
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:00.975469+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cde8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 ms_handle_reset con 0x55d61cde8800 session 0x55d6177ab880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 187097088 unmapped: 42942464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 ms_handle_reset con 0x55d6174e6400 session 0x55d61a205a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 ms_handle_reset con 0x55d61bc41c00 session 0x55d6199f3880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:01.975604+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 354 ms_handle_reset con 0x55d6177f8800 session 0x55d61a7c5c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 354 ms_handle_reset con 0x55d6174e8800 session 0x55d61a06cfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2663908 data_alloc: 218103808 data_used: 2182931
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f75d6000/0x0/0x4ffc00000, data 0x2384445/0x2576000, compress 0x0/0x0/0x0, omap 0x5c8f2, meta 0x605370e), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 354 ms_handle_reset con 0x55d61bc41000 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 54362112 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 355 ms_handle_reset con 0x55d619a8cc00 session 0x55d6184f61c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:02.975778+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 355 ms_handle_reset con 0x55d6174e6400 session 0x55d61a58f180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 355 ms_handle_reset con 0x55d61983d800 session 0x55d61a7c4000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175685632 unmapped: 54353920 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d6174e8800 session 0x55d61738ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:03.975908+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d61bc41c00 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d61a058c00 session 0x55d619906000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d6177f8800 session 0x55d617fd7500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175734784 unmapped: 54304768 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d6174e6400 session 0x55d6184eafc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:04.976250+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d6174e8800 session 0x55d6183c0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a58ec40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173867008 unmapped: 56172544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 357 ms_handle_reset con 0x55d61983d800 session 0x55d61a2d3340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:05.976369+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 357 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173867008 unmapped: 56172544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:06.976533+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2664739 data_alloc: 218103808 data_used: 85153
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 357 ms_handle_reset con 0x55d6177f8800 session 0x55d619bd0e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173875200 unmapped: 56164352 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:07.976647+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 358 ms_handle_reset con 0x55d61bc41c00 session 0x55d619d77880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 358 ms_handle_reset con 0x55d6174e6400 session 0x55d619ac1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 358 ms_handle_reset con 0x55d6174e6400 session 0x55d619beba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f8662000/0x0/0x4ffc00000, data 0x12f3884/0x14ea000, compress 0x0/0x0/0x0, omap 0x5c35e, meta 0x6053ca2), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174120960 unmapped: 55918592 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:08.976766+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 ms_handle_reset con 0x55d61a058c00 session 0x55d619653340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 ms_handle_reset con 0x55d61983d800 session 0x55d61a7c4380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 ms_handle_reset con 0x55d6174e8800 session 0x55d619d77340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 ms_handle_reset con 0x55d6177f8800 session 0x55d619f46e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 55869440 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:09.976895+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e6400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f865c000/0x0/0x4ffc00000, data 0x12f727e/0x14f0000, compress 0x0/0x0/0x0, omap 0x5b57e, meta 0x6054a82), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174120960 unmapped: 55918592 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:10.977058+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.689516068s of 10.111871719s, submitted: 247
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 360 ms_handle_reset con 0x55d6177f8800 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 360 ms_handle_reset con 0x55d6174e6400 session 0x55d619aa8000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 360 ms_handle_reset con 0x55d6174e8800 session 0x55d619984540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 55910400 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:11.977217+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2674018 data_alloc: 218103808 data_used: 89265
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 55910400 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 360 ms_handle_reset con 0x55d61a058c00 session 0x55d617346380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 361 ms_handle_reset con 0x55d61983d800 session 0x55d619984a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:12.977346+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 361 ms_handle_reset con 0x55d61983d800 session 0x55d61738a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173965312 unmapped: 56074240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:13.977480+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173965312 unmapped: 56074240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:14.977644+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173965312 unmapped: 56074240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:15.977769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 361 heartbeat osd_stat(store_statfs(0x4f8658000/0x0/0x4ffc00000, data 0x12fa5c3/0x14f2000, compress 0x0/0x0/0x0, omap 0x5eeb8, meta 0x6051148), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 56066048 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:16.977923+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2680264 data_alloc: 218103808 data_used: 89894
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 362 ms_handle_reset con 0x55d6177f8800 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173981696 unmapped: 56057856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:17.978071+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 363 ms_handle_reset con 0x55d6174e8800 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173989888 unmapped: 56049664 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:18.978225+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173989888 unmapped: 56049664 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:19.978358+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 363 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a58f340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173998080 unmapped: 56041472 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 364 handle_osd_map epochs [364,364], i have 364, src has [1,364]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 364 ms_handle_reset con 0x55d61bc41c00 session 0x55d6177b2a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:20.978480+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174006272 unmapped: 56033280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.401781082s of 10.696518898s, submitted: 168
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 365 ms_handle_reset con 0x55d61cdec400 session 0x55d619bd1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 365 ms_handle_reset con 0x55d61a058c00 session 0x55d619df0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:21.978625+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f864e000/0x0/0x4ffc00000, data 0x12ff9d1/0x14fc000, compress 0x0/0x0/0x0, omap 0x5ec43, meta 0x60513bd), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2691516 data_alloc: 218103808 data_used: 89894
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174006272 unmapped: 56033280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:22.978768+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174006272 unmapped: 56033280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 366 ms_handle_reset con 0x55d6174e8800 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:23.978882+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 56025088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 ms_handle_reset con 0x55d6177f8800 session 0x55d61a58f880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 ms_handle_reset con 0x55d61bc41c00 session 0x55d617fe3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:24.979030+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 56025088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:25.979193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 ms_handle_reset con 0x55d6174e8800 session 0x55d6183c1880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174022656 unmapped: 56016896 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:26.979359+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f8643000/0x0/0x4ffc00000, data 0x1304d69/0x1507000, compress 0x0/0x0/0x0, omap 0x5f33c, meta 0x6050cc4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6177f8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2702637 data_alloc: 218103808 data_used: 90479
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 ms_handle_reset con 0x55d61cdec400 session 0x55d61779f500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f8643000/0x0/0x4ffc00000, data 0x1304d69/0x1507000, compress 0x0/0x0/0x0, omap 0x5f33c, meta 0x6050cc4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 56000512 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:27.979531+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 368 ms_handle_reset con 0x55d61a058c00 session 0x55d619d76fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 368 ms_handle_reset con 0x55d619a8cc00 session 0x55d6183c0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 55984128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6184f3400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 368 ms_handle_reset con 0x55d6184f3400 session 0x55d6183c1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:28.979696+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 ms_handle_reset con 0x55d6174e8800 session 0x55d619aa9dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 ms_handle_reset con 0x55d61983d800 session 0x55d61738aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 ms_handle_reset con 0x55d6177f8800 session 0x55d617f5ddc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 55959552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:29.979870+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 ms_handle_reset con 0x55d619a8cc00 session 0x55d619bec1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 ms_handle_reset con 0x55d61a058c00 session 0x55d6183c0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 370 ms_handle_reset con 0x55d61cdec400 session 0x55d61a58e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173957120 unmapped: 56082432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:30.980025+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f8636000/0x0/0x4ffc00000, data 0x130a1f4/0x150f000, compress 0x0/0x0/0x0, omap 0x5f507, meta 0x6050af9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173957120 unmapped: 56082432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:31.980153+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.198673248s of 10.444491386s, submitted: 106
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2710348 data_alloc: 218103808 data_used: 90479
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 370 ms_handle_reset con 0x55d61a058c00 session 0x55d619653a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173957120 unmapped: 56082432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:32.980313+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 371 ms_handle_reset con 0x55d6174e8800 session 0x55d619beb880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f8637000/0x0/0x4ffc00000, data 0x130a192/0x150e000, compress 0x0/0x0/0x0, omap 0x5f507, meta 0x6050af9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 173965312 unmapped: 56074240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:33.980435+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 372 ms_handle_reset con 0x55d619a8cc00 session 0x55d619f46a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175046656 unmapped: 54992896 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 372 ms_handle_reset con 0x55d61983d800 session 0x55d619ac1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:34.980597+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 372 ms_handle_reset con 0x55d61983d800 session 0x55d619beafc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 373 ms_handle_reset con 0x55d6174e8800 session 0x55d617f5d500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 54984704 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:35.980776+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 373 ms_handle_reset con 0x55d619a8cc00 session 0x55d619f47340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175063040 unmapped: 54976512 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 374 ms_handle_reset con 0x55d61a058c00 session 0x55d61a58e8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:36.980947+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f862e000/0x0/0x4ffc00000, data 0x130f78d/0x1519000, compress 0x0/0x0/0x0, omap 0x62c42, meta 0x604d3be), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720634 data_alloc: 218103808 data_used: 91190
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175079424 unmapped: 54960128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:37.981116+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdef000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 376 ms_handle_reset con 0x55d61cdef000 session 0x55d61a137340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175087616 unmapped: 54951936 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:38.981257+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6174e8800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175120384 unmapped: 54919168 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 ms_handle_reset con 0x55d61cdec400 session 0x55d619bed180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 ms_handle_reset con 0x55d61983d800 session 0x55d619f46e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:39.981391+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 ms_handle_reset con 0x55d619a8cc00 session 0x55d6184eb500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 ms_handle_reset con 0x55d61a058c00 session 0x55d619906a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 ms_handle_reset con 0x55d6174e8800 session 0x55d619652fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175079424 unmapped: 54960128 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 379 ms_handle_reset con 0x55d61983d800 session 0x55d61838bdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:40.981538+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x1319f8f/0x1528000, compress 0x0/0x0/0x0, omap 0x629bd, meta 0x604d643), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 54935552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 380 ms_handle_reset con 0x55d61a058c00 session 0x55d619984540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 380 ms_handle_reset con 0x55d619a8cc00 session 0x55d6183c0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cde8400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:41.981698+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 380 ms_handle_reset con 0x55d61cdec400 session 0x55d6196521c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 380 ms_handle_reset con 0x55d61cde8400 session 0x55d6183c08c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739773 data_alloc: 218103808 data_used: 93158
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175161344 unmapped: 54878208 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:42.981905+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.178769112s of 10.789007187s, submitted: 281
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f861d000/0x0/0x4ffc00000, data 0x131bb39/0x1529000, compress 0x0/0x0/0x0, omap 0x62cf2, meta 0x604d30e), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 381 ms_handle_reset con 0x55d619a8cc00 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 54870016 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 382 ms_handle_reset con 0x55d61983d800 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:43.982052+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 382 ms_handle_reset con 0x55d61a058c00 session 0x55d61a06c8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175194112 unmapped: 54845440 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 383 ms_handle_reset con 0x55d61cdec400 session 0x55d61a58fc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:44.982173+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175194112 unmapped: 54845440 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:45.982285+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 383 ms_handle_reset con 0x55d619ddbc00 session 0x55d617f5da40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f8615000/0x0/0x4ffc00000, data 0x1320e74/0x1531000, compress 0x0/0x0/0x0, omap 0x6375b, meta 0x604c8a5), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175120384 unmapped: 54919168 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:46.982415+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 383 ms_handle_reset con 0x55d619ddbc00 session 0x55d6183c0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2746751 data_alloc: 218103808 data_used: 93142
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 384 ms_handle_reset con 0x55d61983d800 session 0x55d619bd1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 384 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a2d3340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 54927360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:47.982593+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f860f000/0x0/0x4ffc00000, data 0x13245dd/0x1539000, compress 0x0/0x0/0x0, omap 0x63c28, meta 0x604c3d8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 54927360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:48.982741+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 ms_handle_reset con 0x55d61a058c00 session 0x55d61838a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 ms_handle_reset con 0x55d61cdec400 session 0x55d619906fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 54927360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:49.982856+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 ms_handle_reset con 0x55d61cdec400 session 0x55d619653180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175128576 unmapped: 54910976 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 ms_handle_reset con 0x55d61a058c00 session 0x55d61a06d880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ec1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:50.982982+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 386 handle_osd_map epochs [387,387], i have 387, src has [1,387]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d619ec1c00 session 0x55d619becc40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d619a8ec00 session 0x55d619becfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d61983d800 session 0x55d61a58efc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d619fdc800 session 0x55d617fe3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ec1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d619a8ec00 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d619ec1c00 session 0x55d619f46fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 54935552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 ms_handle_reset con 0x55d61cdec400 session 0x55d61a1368c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:51.983127+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 388 ms_handle_reset con 0x55d61a058c00 session 0x55d61a58f6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2773088 data_alloc: 218103808 data_used: 97855
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f8605000/0x0/0x4ffc00000, data 0x13299c8/0x1545000, compress 0x0/0x0/0x0, omap 0x63ef0, meta 0x604c110), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175120384 unmapped: 54919168 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:52.983251+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 388 ms_handle_reset con 0x55d619a8ec00 session 0x55d61a566000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ec1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715250015s of 10.030061722s, submitted: 169
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 389 ms_handle_reset con 0x55d619ec1c00 session 0x55d6196528c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175161344 unmapped: 54878208 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:53.983402+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 390 ms_handle_reset con 0x55d61a058c00 session 0x55d6183c0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175177728 unmapped: 54861824 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:54.983634+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619fdc800 session 0x55d619bebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175177728 unmapped: 54861824 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:55.983824+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619a8cc00 session 0x55d617347c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619ddbc00 session 0x55d619df0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619a8cc00 session 0x55d619aa8000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175243264 unmapped: 54796288 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:56.983971+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f85ff000/0x0/0x4ffc00000, data 0x132ec20/0x154b000, compress 0x0/0x0/0x0, omap 0x63b4a, meta 0x604c4b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619a8ec00 session 0x55d619906fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ec1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778202 data_alloc: 218103808 data_used: 98389
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619fdc800 session 0x55d619773180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619ec1c00 session 0x55d619bd0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a7c4000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f85ff000/0x0/0x4ffc00000, data 0x132ebfd/0x154a000, compress 0x0/0x0/0x0, omap 0x63961, meta 0x604c69f), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175243264 unmapped: 54796288 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:57.984188+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175259648 unmapped: 54779904 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:58.984366+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d619a8ec00 session 0x55d6183c0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175267840 unmapped: 54771712 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:59.984531+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a2d3340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d61a058c00 session 0x55d619906000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d61cdec400 session 0x55d61a5d4fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175267840 unmapped: 54771712 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d619fdc800 session 0x55d61838b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:00.984803+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:01.984964+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 ms_handle_reset con 0x55d61cdec400 session 0x55d619652540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783140 data_alloc: 218103808 data_used: 102469
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:02.985142+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.126968384s of 10.031415939s, submitted: 162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f85f8000/0x0/0x4ffc00000, data 0x1333cbf/0x1552000, compress 0x0/0x0/0x0, omap 0x72494, meta 0x603db6c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:03.985304+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a1f1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:04.985472+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619a8ec00 session 0x55d61838bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619ddbc00 session 0x55d619df0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:05.985618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619ddbc00 session 0x55d61738b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619a8cc00 session 0x55d619772fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:06.985728+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f85f4000/0x0/0x4ffc00000, data 0x133587e/0x1556000, compress 0x0/0x0/0x0, omap 0x7257b, meta 0x603da85), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619fdc800 session 0x55d6177ab6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d619a8ec00 session 0x55d6184f61c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdec400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 ms_handle_reset con 0x55d61cdec400 session 0x55d6183c0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2795777 data_alloc: 218103808 data_used: 103089
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:07.985861+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 54755328 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:08.985991+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d619a8cc00 session 0x55d616e50c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d619a8ec00 session 0x55d61a06cfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f85ee000/0x0/0x4ffc00000, data 0x133746b/0x155b000, compress 0x0/0x0/0x0, omap 0x72d46, meta 0x603d2ba), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d619ddbc00 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d619fdc800 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175431680 unmapped: 54607872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d61a058c00 session 0x55d61a780380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f85ee000/0x0/0x4ffc00000, data 0x133746b/0x155b000, compress 0x0/0x0/0x0, omap 0x72d46, meta 0x603d2ba), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:09.986211+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d619a8cc00 session 0x55d619bec380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 ms_handle_reset con 0x55d61a058c00 session 0x55d61a137340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175431680 unmapped: 54607872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:10.986356+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 ms_handle_reset con 0x55d619a8ec00 session 0x55d61a06ca80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a12bdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f84c9000/0x0/0x4ffc00000, data 0x145a079/0x1681000, compress 0x0/0x0/0x0, omap 0x72f3e, meta 0x603d0c2), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 ms_handle_reset con 0x55d619fdc800 session 0x55d617f5da40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175431680 unmapped: 54607872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:11.986534+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2817041 data_alloc: 218103808 data_used: 103755
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175431680 unmapped: 54607872 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 ms_handle_reset con 0x55d619fdc800 session 0x55d619beafc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:12.986677+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.673118591s of 10.028080940s, submitted: 51
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 ms_handle_reset con 0x55d619a8cc00 session 0x55d6183c08c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175464448 unmapped: 54575104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:13.986858+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175464448 unmapped: 54575104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:14.987072+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 ms_handle_reset con 0x55d619a8ec00 session 0x55d61838a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f84c6000/0x0/0x4ffc00000, data 0x145bc69/0x1684000, compress 0x0/0x0/0x0, omap 0x72fc6, meta 0x603d03a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175464448 unmapped: 54575104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 ms_handle_reset con 0x55d619ddbc00 session 0x55d619653a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a058c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:15.987573+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f84c6000/0x0/0x4ffc00000, data 0x145bc69/0x1684000, compress 0x0/0x0/0x0, omap 0x72fc6, meta 0x603d03a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175480832 unmapped: 54558720 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 ms_handle_reset con 0x55d61a058c00 session 0x55d619f476c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:16.987846+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2820473 data_alloc: 218103808 data_used: 104809
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175489024 unmapped: 54550528 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:17.987995+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175489024 unmapped: 54550528 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:18.988245+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 ms_handle_reset con 0x55d619a8ec00 session 0x55d61838afc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 ms_handle_reset con 0x55d619a8cc00 session 0x55d619aa81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a58e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 ms_handle_reset con 0x55d619fdc800 session 0x55d6196521c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175505408 unmapped: 54534144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:19.988423+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175505408 unmapped: 54534144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:20.988619+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f84b7000/0x0/0x4ffc00000, data 0x145d811/0x1685000, compress 0x0/0x0/0x0, omap 0x73997, meta 0x604c669), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175505408 unmapped: 54534144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:21.988802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2821188 data_alloc: 218103808 data_used: 104923
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175505408 unmapped: 54534144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:22.988957+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.942470551s of 10.114850044s, submitted: 55
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 54525952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:23.989084+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619760800 session 0x55d619beba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619760800 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619a8cc00 session 0x55d619652540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 54517760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:24.989962+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619a8ec00 session 0x55d61a5d4fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a2d3340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 ms_handle_reset con 0x55d619fdc800 session 0x55d619906fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175849472 unmapped: 54190080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:25.990164+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f72fd000/0x0/0x4ffc00000, data 0x148330b/0x16af000, compress 0x0/0x0/0x0, omap 0x73aa7, meta 0x71dc559), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175849472 unmapped: 54190080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:26.990342+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 ms_handle_reset con 0x55d619a8cc00 session 0x55d6178d3dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2846881 data_alloc: 218103808 data_used: 1187307
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175890432 unmapped: 54149120 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:27.990502+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 ms_handle_reset con 0x55d619a8ec00 session 0x55d619bed180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 54140928 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 ms_handle_reset con 0x55d619fdc800 session 0x55d619bed880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:28.990649+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 ms_handle_reset con 0x55d619760800 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 ms_handle_reset con 0x55d61cdee400 session 0x55d619beb6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619ddbc00 session 0x55d617fe2380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d61cdee400 session 0x55d61a1f16c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 176979968 unmapped: 53059584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:29.990751+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619760800 session 0x55d619aa9c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619a8cc00 session 0x55d61a06c700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8ec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619a8ec00 session 0x55d6196536c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 176979968 unmapped: 53059584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:30.990933+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619760800 session 0x55d619aa81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619a8cc00 session 0x55d61838a700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f731f000/0x0/0x4ffc00000, data 0x1462a44/0x168d000, compress 0x0/0x0/0x0, omap 0x74168, meta 0x71dbe98), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 176979968 unmapped: 53059584 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:31.991076+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a1f0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d61cdee400 session 0x55d619bed880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2845114 data_alloc: 218103808 data_used: 1189791
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 176988160 unmapped: 53051392 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:32.991265+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d619fdc800 session 0x55d619bec540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818619728s of 10.433204651s, submitted: 167
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d619fdc800 session 0x55d619d77880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 53018624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:33.991437+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 53018624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:34.991618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 53018624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:35.991760+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d619760800 session 0x55d619f46c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 53018624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:36.991905+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d619a8cc00 session 0x55d6178d3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f743d000/0x0/0x4ffc00000, data 0x13435c2/0x156d000, compress 0x0/0x0/0x0, omap 0x74278, meta 0x71dbd88), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 ms_handle_reset con 0x55d619ddbc00 session 0x55d619aa88c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2827574 data_alloc: 218103808 data_used: 110068
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 53018624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:37.992041+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cded800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 404 ms_handle_reset con 0x55d61cded800 session 0x55d619906fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 404 ms_handle_reset con 0x55d619760800 session 0x55d619f46380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177152000 unmapped: 52887552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:38.992147+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 405 ms_handle_reset con 0x55d619a8cc00 session 0x55d619652000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f7439000/0x0/0x4ffc00000, data 0x13450a3/0x1571000, compress 0x0/0x0/0x0, omap 0x7484d, meta 0x71db7b3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f7433000/0x0/0x4ffc00000, data 0x1346c4f/0x1575000, compress 0x0/0x0/0x0, omap 0x748d5, meta 0x71db72b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177152000 unmapped: 52887552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:39.992632+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d619fdc800 session 0x55d619bd1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d61cdee400 session 0x55d6183c1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177160192 unmapped: 52879360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:40.992815+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d61cdee400 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d619760800 session 0x55d61a06c1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d619a8cc00 session 0x55d6178d2e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 ms_handle_reset con 0x55d619fdc800 session 0x55d619652380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178913280 unmapped: 51126272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:41.992990+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 407 ms_handle_reset con 0x55d61bd55000 session 0x55d617346380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2861420 data_alloc: 218103808 data_used: 110068
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:42.993285+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178921472 unmapped: 51118080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 ms_handle_reset con 0x55d61bd55000 session 0x55d6178d3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 ms_handle_reset con 0x55d619ddbc00 session 0x55d61a58ec40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:43.993556+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178855936 unmapped: 51183616 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.477992058s of 10.644544601s, submitted: 91
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 ms_handle_reset con 0x55d619a8cc00 session 0x55d619bea380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f72cd000/0x0/0x4ffc00000, data 0x14a8479/0x16dc000, compress 0x0/0x0/0x0, omap 0x749e5, meta 0x71db61b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:44.993870+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 51175424 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 ms_handle_reset con 0x55d619760800 session 0x55d617347880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:45.994021+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 51175424 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f72c8000/0x0/0x4ffc00000, data 0x14aa069/0x16df000, compress 0x0/0x0/0x0, omap 0x74520, meta 0x71dbae0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdee400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 409 ms_handle_reset con 0x55d61cdee400 session 0x55d619aa8540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 409 ms_handle_reset con 0x55d619760800 session 0x55d619773c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:46.994190+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 51175424 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 409 ms_handle_reset con 0x55d619a8cc00 session 0x55d619df1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 410 handle_osd_map epochs [410,410], i have 410, src has [1,410]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 410 ms_handle_reset con 0x55d61bd55000 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871462 data_alloc: 218103808 data_used: 111823
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:47.994386+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 51175424 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 411 ms_handle_reset con 0x55d619ddbc00 session 0x55d619f47880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 411 ms_handle_reset con 0x55d619fdc800 session 0x55d619f468c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f72c8000/0x0/0x4ffc00000, data 0x14abc21/0x16e2000, compress 0x0/0x0/0x0, omap 0x73fd3, meta 0x71dc02d), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:48.994590+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 51167232 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdc800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619760800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:49.994764+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178880512 unmapped: 51159040 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 ms_handle_reset con 0x55d619760800 session 0x55d6198a08c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 ms_handle_reset con 0x55d619a8cc00 session 0x55d6178d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 ms_handle_reset con 0x55d619fdc800 session 0x55d61838a8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f72c2000/0x0/0x4ffc00000, data 0x14af572/0x16e8000, compress 0x0/0x0/0x0, omap 0x72b27, meta 0x71dd4d9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddbc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 ms_handle_reset con 0x55d619ddbc00 session 0x55d617346000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:50.994924+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 51093504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f72c1000/0x0/0x4ffc00000, data 0x14af5d4/0x16e9000, compress 0x0/0x0/0x0, omap 0x72e13, meta 0x71dd1ed), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 ms_handle_reset con 0x55d61bd55000 session 0x55d617347180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:51.995060+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178929664 unmapped: 51109888 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881009 data_alloc: 218103808 data_used: 112964
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:52.995193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 413 ms_handle_reset con 0x55d61983dc00 session 0x55d619bd1c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 51101696 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:53.995982+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 51093504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 414 ms_handle_reset con 0x55d6183ca000 session 0x55d619aa9880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a04c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:54.996097+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.483240128s of 10.698817253s, submitted: 128
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 ms_handle_reset con 0x55d61bd56400 session 0x55d61a06c000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 52281344 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 ms_handle_reset con 0x55d61cdecc00 session 0x55d6184f6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f7276000/0x0/0x4ffc00000, data 0x14f2cf5/0x1732000, compress 0x0/0x0/0x0, omap 0x73718, meta 0x71dc8e8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 ms_handle_reset con 0x55d61a04c400 session 0x55d617fe3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:55.996197+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 ms_handle_reset con 0x55d6183ca000 session 0x55d619984540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 52273152 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:56.997216+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 52273152 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 416 ms_handle_reset con 0x55d61983dc00 session 0x55d617347880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902915 data_alloc: 218103808 data_used: 218111
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 416 ms_handle_reset con 0x55d61bd56400 session 0x55d61738aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:57.997371+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177815552 unmapped: 52224000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 417 ms_handle_reset con 0x55d61cdecc00 session 0x55d61a566380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:58.997492+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177815552 unmapped: 52224000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f726d000/0x0/0x4ffc00000, data 0x14f7fa8/0x173b000, compress 0x0/0x0/0x0, omap 0x742c5, meta 0x71dbd3b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:59.997626+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177815552 unmapped: 52224000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619dd3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 417 ms_handle_reset con 0x55d619dd3c00 session 0x55d61838a540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:00.997764+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177815552 unmapped: 52224000 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 418 ms_handle_reset con 0x55d6183ca000 session 0x55d617fe2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:01.997957+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177848320 unmapped: 52191232 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f726b000/0x0/0x4ffc00000, data 0x14f9b6e/0x173f000, compress 0x0/0x0/0x0, omap 0x742c5, meta 0x71dbd3b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 418 ms_handle_reset con 0x55d61983dc00 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 419 ms_handle_reset con 0x55d61c27dc00 session 0x55d619df0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2912061 data_alloc: 218103808 data_used: 218013
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:02.998170+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177856512 unmapped: 52183040 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:03.998314+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 177545216 unmapped: 52494336 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61bd56400 session 0x55d617fd6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61cdecc00 session 0x55d61a7c48c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:04.998470+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178913280 unmapped: 51126272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:05.998693+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.710088730s of 11.126739502s, submitted: 190
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178913280 unmapped: 51126272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61cdecc00 session 0x55d61a780380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:06.998912+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 178921472 unmapped: 51118080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d6183ca000 session 0x55d617347500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61983dc00 session 0x55d6183c0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f6b72000/0x0/0x4ffc00000, data 0x1becdf7/0x1e37000, compress 0x0/0x0/0x0, omap 0x749aa, meta 0x71db656), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165761 data_alloc: 218103808 data_used: 570269
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61bd56400 session 0x55d619906380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:07.999052+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdd400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d61c27c400 session 0x55d61a06cfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 ms_handle_reset con 0x55d619fdd400 session 0x55d619d77340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 179036160 unmapped: 51003392 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 422 ms_handle_reset con 0x55d61c27c400 session 0x55d61838bdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 422 ms_handle_reset con 0x55d6183ca000 session 0x55d61a58e540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:08.999191+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 422 ms_handle_reset con 0x55d61c27dc00 session 0x55d61838ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180084736 unmapped: 49954816 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 423 ms_handle_reset con 0x55d61983dc00 session 0x55d6183c0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:09.999362+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 423 ms_handle_reset con 0x55d61983dc00 session 0x55d617346700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180109312 unmapped: 49930240 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f43b0000/0x0/0x4ffc00000, data 0x43ae94c/0x45f9000, compress 0x0/0x0/0x0, omap 0x74aba, meta 0x71db546), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:10.999508+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 424 ms_handle_reset con 0x55d6183ca000 session 0x55d61a58e540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 49922048 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdd400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:11.999629+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 49922048 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3190562 data_alloc: 218103808 data_used: 575534
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:12.999755+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 425 ms_handle_reset con 0x55d619fdd400 session 0x55d616e50000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180150272 unmapped: 49889280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f438a000/0x0/0x4ffc00000, data 0x43d3c72/0x461e000, compress 0x0/0x0/0x0, omap 0x7548b, meta 0x71dab75), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 425 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:13.999923+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180150272 unmapped: 49889280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:15.000261+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180150272 unmapped: 49889280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 427 ms_handle_reset con 0x55d61c27c400 session 0x55d619984fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 427 ms_handle_reset con 0x55d61c27dc00 session 0x55d6178d3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:16.000489+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180150272 unmapped: 49889280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.718261719s of 10.150862694s, submitted: 141
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:17.000662+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 49807360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3195707 data_alloc: 218103808 data_used: 574250
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:18.000902+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 429 ms_handle_reset con 0x55d61c27dc00 session 0x55d6183c1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 49807360 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 429 ms_handle_reset con 0x55d61983dc00 session 0x55d616e50380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 429 ms_handle_reset con 0x55d6183ca000 session 0x55d61a780380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f4352000/0x0/0x4ffc00000, data 0x4408697/0x4656000, compress 0x0/0x0/0x0, omap 0x756ab, meta 0x71da955), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdd400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:19.001093+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 49504256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:20.001378+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 49504256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:21.001581+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 49504256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:22.001720+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 49504256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 234881024 data_used: 11960060
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:23.001858+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:24.002077+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f434e000/0x0/0x4ffc00000, data 0x440d132/0x465c000, compress 0x0/0x0/0x0, omap 0x75733, meta 0x71da8cd), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:25.002312+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:26.002421+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:27.002659+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276788 data_alloc: 234881024 data_used: 11960060
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:28.002904+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.310859680s of 11.899989128s, submitted: 74
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 44761088 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:29.003097+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 ms_handle_reset con 0x55d61bd56400 session 0x55d61a566380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 ms_handle_reset con 0x55d61cdecc00 session 0x55d619f47dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 44597248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f434b000/0x0/0x4ffc00000, data 0x440ebb1/0x465f000, compress 0x0/0x0/0x0, omap 0x75d08, meta 0x71da2f8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f4345000/0x0/0x4ffc00000, data 0x44108ee/0x4665000, compress 0x0/0x0/0x0, omap 0x75d08, meta 0x71da2f8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:30.003218+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 44597248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f4345000/0x0/0x4ffc00000, data 0x44108ee/0x4665000, compress 0x0/0x0/0x0, omap 0x75d08, meta 0x71da2f8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:31.003346+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 185442304 unmapped: 44597248 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:32.003464+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192053248 unmapped: 37986304 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317609 data_alloc: 234881024 data_used: 12766460
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:33.003593+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192356352 unmapped: 37683200 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:34.003773+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193077248 unmapped: 36962304 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f3871000/0x0/0x4ffc00000, data 0x50668ee/0x50fb000, compress 0x0/0x0/0x0, omap 0x75d08, meta 0x71da2f8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:35.003945+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 36929536 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 ms_handle_reset con 0x55d61cdecc00 session 0x55d617fe3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:36.004093+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192217088 unmapped: 37822464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f37ae000/0x0/0x4ffc00000, data 0x51678ee/0x51fe000, compress 0x0/0x0/0x0, omap 0x75843, meta 0x71da7bd), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:37.004225+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 ms_handle_reset con 0x55d6183ca000 session 0x55d61a12b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192217088 unmapped: 37822464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f37a9000/0x0/0x4ffc00000, data 0x516948a/0x5201000, compress 0x0/0x0/0x0, omap 0x75843, meta 0x71da7bd), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386320 data_alloc: 234881024 data_used: 12831996
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:38.004439+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192217088 unmapped: 37822464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.766788483s of 10.101539612s, submitted: 143
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 ms_handle_reset con 0x55d61c27dc00 session 0x55d619f46c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 ms_handle_reset con 0x55d61bd56400 session 0x55d619f46380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 ms_handle_reset con 0x55d61983dc00 session 0x55d619bec000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:39.004586+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d619fdf000 session 0x55d61a7c4e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 37765120 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d6183ca000 session 0x55d6173476c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d61bd56400 session 0x55d6178d3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:40.004755+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192299008 unmapped: 37740544 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:41.004920+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d61c27dc00 session 0x55d619df0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 36659200 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f37a6000/0x0/0x4ffc00000, data 0x516b07a/0x5204000, compress 0x0/0x0/0x0, omap 0x74eb9, meta 0x71db147), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:42.005079+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 36659200 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d619fdd400 session 0x55d61a7c48c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d61c27c400 session 0x55d61a567c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d6183ca000 session 0x55d6199f3880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:43.005222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392051 data_alloc: 234881024 data_used: 12849063
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 ms_handle_reset con 0x55d619fdf000 session 0x55d61a1f1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193413120 unmapped: 36626432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 435 ms_handle_reset con 0x55d61cdecc00 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 435 ms_handle_reset con 0x55d61bd56400 session 0x55d619beac40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 435 ms_handle_reset con 0x55d61bd56400 session 0x55d617347180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:44.005420+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193437696 unmapped: 36601856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d6183ca000 session 0x55d6197ea380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d619fdf000 session 0x55d619985a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d61c27c400 session 0x55d61738ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:45.005573+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27dc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d61c27dc00 session 0x55d61a1f1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193241088 unmapped: 36798464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d6183ca000 session 0x55d619bec8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:46.005685+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 36790272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f37c6000/0x0/0x4ffc00000, data 0x514a822/0x51e6000, compress 0x0/0x0/0x0, omap 0x74d8f, meta 0x71db271), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 ms_handle_reset con 0x55d619fdf000 session 0x55d61a58e540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:47.005836+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 36790272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:48.005978+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390277 data_alloc: 234881024 data_used: 12888591
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 437 ms_handle_reset con 0x55d61bd56400 session 0x55d6183c1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 36790272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:49.006139+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 36790272 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 437 ms_handle_reset con 0x55d61c27c400 session 0x55d619f46380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619763400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765624046s of 11.072802544s, submitted: 159
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:50.006276+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 ms_handle_reset con 0x55d619763400 session 0x55d6199f3880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 36773888 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 ms_handle_reset con 0x55d6183ca000 session 0x55d619bd1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 ms_handle_reset con 0x55d619fdf000 session 0x55d61a06c8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:51.006503+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f37bf000/0x0/0x4ffc00000, data 0x514e064/0x51ed000, compress 0x0/0x0/0x0, omap 0x74e5b, meta 0x71db1a5), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193314816 unmapped: 36724736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:52.006666+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 ms_handle_reset con 0x55d61bd56400 session 0x55d61a06c000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193314816 unmapped: 36724736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:53.006836+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396829 data_alloc: 234881024 data_used: 12889204
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193314816 unmapped: 36724736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:54.006982+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 440 ms_handle_reset con 0x55d61c27c400 session 0x55d6183c0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193314816 unmapped: 36724736 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd58000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 440 ms_handle_reset con 0x55d61bd58000 session 0x55d619985180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:55.007184+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 ms_handle_reset con 0x55d6183ca000 session 0x55d617fe3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193372160 unmapped: 36667392 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f37b5000/0x0/0x4ffc00000, data 0x51516b7/0x51f3000, compress 0x0/0x0/0x0, omap 0x75540, meta 0x71daac0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 ms_handle_reset con 0x55d619fdf000 session 0x55d619f47340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:56.007312+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 ms_handle_reset con 0x55d61bd56400 session 0x55d6178d68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193413120 unmapped: 36626432 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:57.008014+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193437696 unmapped: 36601856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 ms_handle_reset con 0x55d61c27c400 session 0x55d61a1f1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f353b000/0x0/0x4ffc00000, data 0x53c9245/0x546b000, compress 0x0/0x0/0x0, omap 0x75650, meta 0x71da9b0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:58.008306+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431393 data_alloc: 234881024 data_used: 14827124
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193437696 unmapped: 36601856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd54400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:59.008536+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 443 ms_handle_reset con 0x55d61bd54400 session 0x55d619772540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192380928 unmapped: 37658624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 443 ms_handle_reset con 0x55d6183ca000 session 0x55d6184f6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:00.008656+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.722994804s of 10.853238106s, submitted: 96
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d619fdf000 session 0x55d619becfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 37625856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d61bd56400 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:01.008777+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 37625856 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d61c27c400 session 0x55d616e50000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d619a8c400 session 0x55d617f5cfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f3536000/0x0/0x4ffc00000, data 0x53ce488/0x5474000, compress 0x0/0x0/0x0, omap 0x75cad, meta 0x71da353), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d6183ca000 session 0x55d61838a8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:02.008905+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191856640 unmapped: 38182912 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d619a8c400 session 0x55d619d77c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d619fdf000 session 0x55d6178d2fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f3536000/0x0/0x4ffc00000, data 0x53ce488/0x5474000, compress 0x0/0x0/0x0, omap 0x75cad, meta 0x71da353), peers [0,2] op hist [0,1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 ms_handle_reset con 0x55d61bd56400 session 0x55d61a12a540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:03.009023+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441509 data_alloc: 234881024 data_used: 14827709
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191864832 unmapped: 38174720 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:04.009152+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b864000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 446 ms_handle_reset con 0x55d61b864000 session 0x55d61a1368c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192921600 unmapped: 37117952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:05.009306+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 192921600 unmapped: 37117952 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 446 ms_handle_reset con 0x55d6183ca000 session 0x55d6184ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:06.009430+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 ms_handle_reset con 0x55d619a8c400 session 0x55d6178d3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193896448 unmapped: 36143104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 ms_handle_reset con 0x55d619fdf000 session 0x55d619df0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b864000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 ms_handle_reset con 0x55d61b864000 session 0x55d619f46fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:07.009623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193912832 unmapped: 36126720 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f352b000/0x0/0x4ffc00000, data 0x53d373d/0x547f000, compress 0x0/0x0/0x0, omap 0x7630a, meta 0x71d9cf6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:08.009769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 ms_handle_reset con 0x55d61bd56400 session 0x55d617fd7500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449126 data_alloc: 234881024 data_used: 15324934
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193921024 unmapped: 36118528 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:09.009902+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 36102144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:10.010009+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 449 ms_handle_reset con 0x55d6183ca000 session 0x55d619beb6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 36102144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f352a000/0x0/0x4ffc00000, data 0x53d5166/0x5480000, compress 0x0/0x0/0x0, omap 0x764a2, meta 0x71d9b5e), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 449 ms_handle_reset con 0x55d619a8c400 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:11.010132+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 449 ms_handle_reset con 0x55d619fdf000 session 0x55d619aa8fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.008796692s of 11.170689583s, submitted: 107
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 36102144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b864000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 450 ms_handle_reset con 0x55d61b864000 session 0x55d61a7c5dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd56400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:12.010280+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 450 ms_handle_reset con 0x55d61bd56400 session 0x55d619f46fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194011136 unmapped: 36028416 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:13.010424+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457400 data_alloc: 234881024 data_used: 15329302
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194011136 unmapped: 36028416 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 451 ms_handle_reset con 0x55d6183ca000 session 0x55d616e50000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:14.010566+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194052096 unmapped: 35987456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:15.010723+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194052096 unmapped: 35987456 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 452 ms_handle_reset con 0x55d619a8c400 session 0x55d619f47340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:16.010847+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 452 ms_handle_reset con 0x55d619fdf000 session 0x55d61a06c8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b864000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 452 heartbeat osd_stat(store_statfs(0x4f3519000/0x0/0x4ffc00000, data 0x53dbf71/0x548d000, compress 0x0/0x0/0x0, omap 0x76b87, meta 0x71d9479), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 35971072 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 452 ms_handle_reset con 0x55d61b864000 session 0x55d619bec380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:17.010972+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619776800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 453 ms_handle_reset con 0x55d619776800 session 0x55d61a06cfc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 35905536 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 453 ms_handle_reset con 0x55d6183ca000 session 0x55d61a7c56c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:18.011120+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f3519000/0x0/0x4ffc00000, data 0x53ddb51/0x548f000, compress 0x0/0x0/0x0, omap 0x76c0f, meta 0x71d93f1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472678 data_alloc: 234881024 data_used: 15612438
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 35774464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 454 ms_handle_reset con 0x55d619a8c400 session 0x55d619beb880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:19.011263+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 35741696 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:20.011388+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194306048 unmapped: 35733504 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 455 ms_handle_reset con 0x55d619fdf000 session 0x55d617347340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 455 ms_handle_reset con 0x55d61cdecc00 session 0x55d619d761c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b864000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 455 ms_handle_reset con 0x55d61b864000 session 0x55d617346700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:21.011498+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 35725312 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 455 ms_handle_reset con 0x55d6183ca000 session 0x55d61838b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:22.011658+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 35725312 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.923995972s of 11.137657166s, submitted: 71
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 456 ms_handle_reset con 0x55d619a8c400 session 0x55d619bd1180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 456 ms_handle_reset con 0x55d619fdf000 session 0x55d6197ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 456 ms_handle_reset con 0x55d61cdecc00 session 0x55d619bd0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:23.011825+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469074 data_alloc: 234881024 data_used: 15589398
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 196517888 unmapped: 33521664 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ebf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 457 ms_handle_reset con 0x55d619ebf000 session 0x55d619906380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 457 ms_handle_reset con 0x55d6183ca000 session 0x55d619c7b500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 457 heartbeat osd_stat(store_statfs(0x4f25e6000/0x0/0x4ffc00000, data 0x516cd94/0x5222000, compress 0x0/0x0/0x0, omap 0x76da7, meta 0x8379259), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 457 ms_handle_reset con 0x55d619a8c400 session 0x55d619aa9180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:24.011936+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 33505280 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 ms_handle_reset con 0x55d619fdf000 session 0x55d616e508c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:25.012099+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 34865152 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 ms_handle_reset con 0x55d61bd55000 session 0x55d619f46700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdecc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 ms_handle_reset con 0x55d61cdecc00 session 0x55d619bd16c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:26.012223+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 35110912 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:27.012355+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 35110912 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 ms_handle_reset con 0x55d6183ca000 session 0x55d619d76380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 ms_handle_reset con 0x55d619a8c400 session 0x55d619652c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:28.012485+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397410 data_alloc: 234881024 data_used: 13191651
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 35078144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f28ca000/0x0/0x4ffc00000, data 0x4735e35/0x49aa000, compress 0x0/0x0/0x0, omap 0x76a7a, meta 0x8379586), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:29.012619+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f28ca000/0x0/0x4ffc00000, data 0x4735e35/0x49aa000, compress 0x0/0x0/0x0, omap 0x76a7a, meta 0x8379586), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 35078144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:30.012769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 35078144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:31.012954+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 35078144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:32.013080+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 459 handle_osd_map epochs [460,460], i have 460, src has [1,460]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194961408 unmapped: 35078144 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 460 ms_handle_reset con 0x55d619fdf000 session 0x55d61a06d340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:33.013263+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397561 data_alloc: 234881024 data_used: 13195649
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.123728752s of 10.689134598s, submitted: 157
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:34.013430+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 461 heartbeat osd_stat(store_statfs(0x4f2e5d000/0x0/0x4ffc00000, data 0x4739490/0x49ad000, compress 0x0/0x0/0x0, omap 0x7704f, meta 0x8378fb1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:35.013604+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:36.013759+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 461 ms_handle_reset con 0x55d61bd55000 session 0x55d61a12b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:37.013951+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194977792 unmapped: 35061760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:38.014160+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403569 data_alloc: 234881024 data_used: 13175782
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194977792 unmapped: 35061760 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d619f47dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d619df0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:39.014315+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f2e58000/0x0/0x4ffc00000, data 0x473af81/0x49b2000, compress 0x0/0x0/0x0, omap 0x7715f, meta 0x8378ea1), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d6178d2700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619a8c400 session 0x55d6183c0540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d619becc40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:40.014681+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194994176 unmapped: 35045376 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:41.014800+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61bd55000 session 0x55d6197eb6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61bd55000 session 0x55d619773180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d6198a08c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195043328 unmapped: 34996224 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d619c7ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:42.014936+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d616e508c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195059712 unmapped: 34979840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619a8c400 session 0x55d619bec380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:43.015121+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412134 data_alloc: 234881024 data_used: 13187318
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195059712 unmapped: 34979840 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.101666451s of 10.344555855s, submitted: 84
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619a8c400 session 0x55d617347340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d619d76380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f2e5a000/0x0/0x4ffc00000, data 0x473af81/0x49b2000, compress 0x0/0x0/0x0, omap 0x76e35, meta 0x83791cb), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61bd55000 session 0x55d619907c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:44.015271+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d619aa81c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d617fe3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d61a567dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 34381824 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:45.015461+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d617346c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 35119104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:46.015620+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 35119104 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d6177aa000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f2874000/0x0/0x4ffc00000, data 0x4d1ff91/0x4f98000, compress 0x0/0x0/0x0, omap 0x76b08, meta 0x83794f8), peers [0,2] op hist [1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d619beba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:47.015746+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619a8c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619a8c400 session 0x55d619aa8e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d619772fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:48.015878+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3101623 data_alloc: 218103808 data_used: 129270
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:49.016031+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:50.016184+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:51.016388+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:52.016521+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5401000/0x0/0x4ffc00000, data 0x198df1f/0x1c04000, compress 0x0/0x0/0x0, omap 0x76c18, meta 0x83793e8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:53.016683+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3101623 data_alloc: 218103808 data_used: 129270
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:54.016844+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:55.017015+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43671552 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d6197ebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d6184ebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.124221802s of 12.277992249s, submitted: 72
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d6199061c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:56.017161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ebf400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186679296 unmapped: 43360256 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:57.017355+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5be4000/0x0/0x4ffc00000, data 0x19b1f1f/0x1c28000, compress 0x0/0x0/0x0, omap 0x76c18, meta 0x83793e8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 43982848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:58.017510+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131998 data_alloc: 218103808 data_used: 4971270
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 43982848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:59.017637+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 43982848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:00.017814+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5be4000/0x0/0x4ffc00000, data 0x19b1f1f/0x1c28000, compress 0x0/0x0/0x0, omap 0x76c18, meta 0x83793e8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 43982848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:01.017974+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5be4000/0x0/0x4ffc00000, data 0x19b1f1f/0x1c28000, compress 0x0/0x0/0x0, omap 0x76c18, meta 0x83793e8), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 43982848 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27cc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:02.018100+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186073088 unmapped: 43966464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:03.018444+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138660 data_alloc: 218103808 data_used: 4971270
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186073088 unmapped: 43966464 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27cc00 session 0x55d6178d3a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ba3000/0x0/0x4ffc00000, data 0x19f1f2f/0x1c69000, compress 0x0/0x0/0x0, omap 0x76753, meta 0x83798ad), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:04.018623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 43950080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:05.018850+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 43950080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:06.019002+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 186089472 unmapped: 43950080 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ba3000/0x0/0x4ffc00000, data 0x19f1f2f/0x1c69000, compress 0x0/0x0/0x0, omap 0x76753, meta 0x83798ad), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:07.019141+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.368728638s of 11.548904419s, submitted: 9
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 42737664 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:08.019251+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3174563 data_alloc: 218103808 data_used: 5442310
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 41385984 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:09.019360+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f570f000/0x0/0x4ffc00000, data 0x1e7ff2f/0x20f7000, compress 0x0/0x0/0x0, omap 0x76753, meta 0x83798ad), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:10.019607+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:11.019801+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f567c000/0x0/0x4ffc00000, data 0x1f10f2f/0x2188000, compress 0x0/0x0/0x0, omap 0x76753, meta 0x83798ad), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:12.019996+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:13.020187+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3183967 data_alloc: 218103808 data_used: 5380870
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:14.020336+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190332928 unmapped: 39706624 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:15.020488+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d6184ea000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 190537728 unmapped: 39501824 heap: 230039552 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d61838b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:16.020618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d61a12a540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d619d77340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191602688 unmapped: 46833664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:17.020868+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f346a000/0x0/0x4ffc00000, data 0x40eaf2f/0x4362000, compress 0x0/0x0/0x0, omap 0x76316, meta 0x8379cea), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191643648 unmapped: 46792704 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61bd55000 session 0x55d619f46000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619ebf400 session 0x55d61a7c5180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:18.021013+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.995861053s of 10.787898064s, submitted: 121
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61bd55000 session 0x55d619aa9340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3353133 data_alloc: 218103808 data_used: 5268230
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191660032 unmapped: 46776320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:19.021218+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191660032 unmapped: 46776320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:20.021393+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191660032 unmapped: 46776320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d61a1f16c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d619bd1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d619984540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:21.021545+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d619beb6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d619aa8540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:22.021810+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x75dc9, meta 0x837a237), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:23.021959+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425491 data_alloc: 218103808 data_used: 5268230
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x75dc9, meta 0x837a237), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:24.022127+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x75dc9, meta 0x837a237), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:25.023021+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:26.023167+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d6198a0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:27.023323+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ebf400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:28.023460+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426128 data_alloc: 218103808 data_used: 5268230
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d619becc40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191365120 unmapped: 47071232 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:29.023666+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 191651840 unmapped: 46784512 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fde000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fde000 session 0x55d619aa9c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:30.023763+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 38903808 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:31.023957+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d61738aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d6183ca000 session 0x55d6178d3c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 38903808 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:32.024061+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 38903808 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:33.024175+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492816 data_alloc: 234881024 data_used: 16452886
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 38903808 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29ca000/0x0/0x4ffc00000, data 0x4bc9f91/0x4e42000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d6199f3500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:34.024315+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 38903808 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619ddd800 session 0x55d61a566380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:35.024488+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 200130560 unmapped: 38305792 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:36.024678+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fd000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61d2fd000 session 0x55d6184f7c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 200130560 unmapped: 38305792 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.384754181s of 18.538589478s, submitted: 58
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d618058400 session 0x55d6183c0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:37.024791+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183ca000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 200278016 unmapped: 38158336 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:38.024904+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503416 data_alloc: 234881024 data_used: 17696534
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f29a6000/0x0/0x4ffc00000, data 0x4bedf91/0x4e66000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 37994496 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:39.025053+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 37994496 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:40.025199+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204767232 unmapped: 33669120 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:41.025360+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 203948032 unmapped: 34488320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:42.025482+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f222f000/0x0/0x4ffc00000, data 0x5364f91/0x55dd000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 203948032 unmapped: 34488320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:43.025631+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560688 data_alloc: 234881024 data_used: 19469590
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 203948032 unmapped: 34488320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:44.025752+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 203948032 unmapped: 34488320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:45.025913+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f222f000/0x0/0x4ffc00000, data 0x5364f91/0x55dd000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 203948032 unmapped: 34488320 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:46.026261+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 207601664 unmapped: 30834688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:47.026611+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.072984695s of 10.336289406s, submitted: 113
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209043456 unmapped: 29392896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:48.026781+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714048 data_alloc: 234881024 data_used: 20007190
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 30416896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:49.026882+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 30253056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:50.027012+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 30253056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61c27c400 session 0x55d61a06d880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:51.027151+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f080a000/0x0/0x4ffc00000, data 0x6d87f91/0x7000000, compress 0x0/0x0/0x0, omap 0x7602d, meta 0x8379fd3), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208183296 unmapped: 30253056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f080a000/0x0/0x4ffc00000, data 0x6d88003/0x7002000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:52.027292+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f080a000/0x0/0x4ffc00000, data 0x6d88003/0x7002000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208281600 unmapped: 30154752 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:53.027422+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725152 data_alloc: 234881024 data_used: 20543254
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f07fe000/0x0/0x4ffc00000, data 0x6d94003/0x700e000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208281600 unmapped: 30154752 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:54.027565+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619e7d400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208281600 unmapped: 30154752 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:55.027778+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f07fe000/0x0/0x4ffc00000, data 0x6d94003/0x700e000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d619fdf000 session 0x55d619d77c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208281600 unmapped: 30154752 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 ms_handle_reset con 0x55d61776a000 session 0x55d617fd6c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:56.027912+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd54400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208584704 unmapped: 29851648 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:57.028118+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.446816444s of 10.673620224s, submitted: 94
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 463 ms_handle_reset con 0x55d61bd54400 session 0x55d61a7c4a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208166912 unmapped: 30269440 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:58.028228+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752346 data_alloc: 234881024 data_used: 20592422
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 464 handle_osd_map epochs [464,464], i have 464, src has [1,464]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208191488 unmapped: 30244864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 464 ms_handle_reset con 0x55d61776a000 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:59.028359+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f072a000/0x0/0x4ffc00000, data 0x6ee6b9f/0x70df000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208191488 unmapped: 30244864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:00.028550+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f0725000/0x0/0x4ffc00000, data 0x6ee873b/0x70e2000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208232448 unmapped: 30203904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 ms_handle_reset con 0x55d618058400 session 0x55d617fd6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:01.028770+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208240640 unmapped: 30195712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:02.028899+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f071f000/0x0/0x4ffc00000, data 0x6f732d7/0x70ea000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208240640 unmapped: 30195712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:03.029030+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764798 data_alloc: 234881024 data_used: 20596518
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208240640 unmapped: 30195712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f071f000/0x0/0x4ffc00000, data 0x6f732d7/0x70ea000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:04.029161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:05.029541+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:06.029790+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:07.030319+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:08.030453+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3763774 data_alloc: 234881024 data_used: 20596518
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:09.030628+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f071d000/0x0/0x4ffc00000, data 0x6f782d7/0x70ef000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f071d000/0x0/0x4ffc00000, data 0x6f782d7/0x70ef000, compress 0x0/0x0/0x0, omap 0x75b68, meta 0x837a498), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:10.030802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:11.030957+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.977819443s of 14.063630104s, submitted: 46
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 30171136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 ms_handle_reset con 0x55d619fdf000 session 0x55d619df0380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:12.031093+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208404480 unmapped: 30031872 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:13.031208+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765607 data_alloc: 234881024 data_used: 20596518
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208404480 unmapped: 30031872 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:14.031382+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a04c800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208535552 unmapped: 29900800 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61a04c800 session 0x55d619c7b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:15.031565+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f071d000/0x0/0x4ffc00000, data 0x6f782d7/0x70ef000, compress 0x0/0x0/0x0, omap 0x7561b, meta 0x837a9e5), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208543744 unmapped: 29892608 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:16.031731+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdea400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61cdea400 session 0x55d61838ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61983d000 session 0x55d6197eb500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:17.031884+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:18.032044+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774401 data_alloc: 234881024 data_used: 20759334
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:19.032179+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:20.032297+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:21.032443+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0710000/0x0/0x4ffc00000, data 0x6f83f37/0x70fa000, compress 0x0/0x0/0x0, omap 0x75ce6, meta 0x837a31a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:22.032617+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:23.032773+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0710000/0x0/0x4ffc00000, data 0x6f83f37/0x70fa000, compress 0x0/0x0/0x0, omap 0x75ce6, meta 0x837a31a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777543 data_alloc: 234881024 data_used: 20759334
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:24.032889+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209862656 unmapped: 28573696 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:25.033062+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.399115562s of 13.433613777s, submitted: 20
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61776a000 session 0x55d6177ab500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209870848 unmapped: 28565504 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:26.033194+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:27.033359+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:28.033482+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784445 data_alloc: 234881024 data_used: 21144968
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:29.033656+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:30.033795+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:31.033924+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:32.034065+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:33.034219+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784445 data_alloc: 234881024 data_used: 21144968
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:34.034354+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:35.034526+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:36.034662+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:37.034753+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:38.034882+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3784445 data_alloc: 234881024 data_used: 21144968
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:39.035010+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.805439949s of 14.825035095s, submitted: 12
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:40.035143+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210968576 unmapped: 27467776 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:41.035292+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 27820032 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:42.035429+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 27820032 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:43.035556+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3790519 data_alloc: 234881024 data_used: 22717832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 27820032 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:44.035777+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f0711000/0x0/0x4ffc00000, data 0x6f83f5a/0x70fb000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 27820032 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:45.036001+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 27820032 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:46.036148+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 25985024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:47.036268+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 25985024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:48.036386+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800467 data_alloc: 234881024 data_used: 22861192
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 25985024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:49.036532+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 25985024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f063e000/0x0/0x4ffc00000, data 0x7056f5a/0x71ce000, compress 0x0/0x0/0x0, omap 0x75f4a, meta 0x837a0b6), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:50.036700+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993975639s of 10.016456604s, submitted: 14
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211689472 unmapped: 26746880 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:51.036875+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61c27c400 session 0x55d61838a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d6172e4c00 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a04c800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211689472 unmapped: 26746880 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61a04c800 session 0x55d61a7c4380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:52.036995+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 26722304 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:53.037161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3822825 data_alloc: 234881024 data_used: 22910344
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 26722304 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:54.037276+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f062e000/0x0/0x4ffc00000, data 0x7261f5a/0x71de000, compress 0x0/0x0/0x0, omap 0x759fd, meta 0x837a603), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 26722304 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:55.037463+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 26722304 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:56.037791+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d6172e4c00 session 0x55d6184f6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61776a000 session 0x55d619c7a1c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f422a000/0x0/0x4ffc00000, data 0x3666f4a/0x35e2000, compress 0x0/0x0/0x0, omap 0x75a85, meta 0x837a57b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:57.037972+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f422a000/0x0/0x4ffc00000, data 0x3666f4a/0x35e2000, compress 0x0/0x0/0x0, omap 0x75a85, meta 0x837a57b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:58.038124+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501585 data_alloc: 234881024 data_used: 20777336
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:59.038292+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f422a000/0x0/0x4ffc00000, data 0x3666f4a/0x35e2000, compress 0x0/0x0/0x0, omap 0x75a85, meta 0x837a57b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209969152 unmapped: 28467200 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:00.038483+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.437401772s of 10.514878273s, submitted: 52
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210313216 unmapped: 28123136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:01.038677+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210313216 unmapped: 28123136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:02.038804+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210313216 unmapped: 28123136 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:03.038941+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510569 data_alloc: 234881024 data_used: 20879736
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210386944 unmapped: 28049408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:04.039134+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f41ba000/0x0/0x4ffc00000, data 0x36d5f4a/0x3651000, compress 0x0/0x0/0x0, omap 0x75a85, meta 0x837a57b), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d618058400 session 0x55d616e50000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d619fdf000 session 0x55d619beddc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208936960 unmapped: 29499392 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:05.039313+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61983d000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61983d000 session 0x55d619df0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208977920 unmapped: 29458432 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:06.039448+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208977920 unmapped: 29458432 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:07.039570+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d6172e4c00 session 0x55d619bea8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208977920 unmapped: 29458432 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f41bb000/0x0/0x4ffc00000, data 0x36d5f27/0x3650000, compress 0x0/0x0/0x0, omap 0x7503f, meta 0x837afc1), peers [0,2] op hist [1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:08.039676+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d61776a000 session 0x55d61838a380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494401 data_alloc: 234881024 data_used: 20773205
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208994304 unmapped: 29442048 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:09.039818+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d618058400 session 0x55d6177abdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208994304 unmapped: 29442048 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:10.039942+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 ms_handle_reset con 0x55d619fdf000 session 0x55d61a7c4000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 208994304 unmapped: 29442048 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:11.040157+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.993884087s of 11.110601425s, submitted: 64
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:12.040307+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 ms_handle_reset con 0x55d6183ca000 session 0x55d6197eb340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f4300000/0x0/0x4ffc00000, data 0x3592e63/0x350b000, compress 0x0/0x0/0x0, omap 0x753a6, meta 0x837ac5a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 ms_handle_reset con 0x55d619ddd800 session 0x55d617f5d6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 ms_handle_reset con 0x55d6172e4c00 session 0x55d6178d28c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:13.040491+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474521 data_alloc: 234881024 data_used: 20652177
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:14.040644+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:15.040765+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:16.040874+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f4333000/0x0/0x4ffc00000, data 0x3360a53/0x34d9000, compress 0x0/0x0/0x0, omap 0x756b9, meta 0x837a947), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 29425664 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:17.040993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 ms_handle_reset con 0x55d61776a000 session 0x55d61a06c380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 ms_handle_reset con 0x55d618058400 session 0x55d6173461c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:18.041133+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351963 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:19.041265+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:20.041395+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f4e6b000/0x0/0x4ffc00000, data 0x28264c2/0x299f000, compress 0x0/0x0/0x0, omap 0x75ccf, meta 0x837a331), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:21.041586+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:22.041762+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:23.041888+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351963 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:24.042064+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:25.042261+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:26.042377+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f4e6b000/0x0/0x4ffc00000, data 0x28264c2/0x299f000, compress 0x0/0x0/0x0, omap 0x75ccf, meta 0x837a331), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:27.042585+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:28.042742+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351963 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:29.042907+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:30.043033+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f4e6b000/0x0/0x4ffc00000, data 0x28264c2/0x299f000, compress 0x0/0x0/0x0, omap 0x75ccf, meta 0x837a331), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:31.043193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:32.043360+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:33.043524+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351963 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:34.043640+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:35.043802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f4e6b000/0x0/0x4ffc00000, data 0x28264c2/0x299f000, compress 0x0/0x0/0x0, omap 0x75ccf, meta 0x837a331), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:36.043928+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:37.044078+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:38.044234+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f4e6b000/0x0/0x4ffc00000, data 0x28264c2/0x299f000, compress 0x0/0x0/0x0, omap 0x75ccf, meta 0x837a331), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351963 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:39.044408+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 33488896 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619fdf000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.511180878s of 27.581590652s, submitted: 45
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d619fdf000 session 0x55d619f47dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f4e6c000/0x0/0x4ffc00000, data 0x28264d2/0x29a0000, compress 0x0/0x0/0x0, omap 0x75fb0, meta 0x837a050), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:40.044631+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 33480704 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f4e67000/0x0/0x4ffc00000, data 0x282806e/0x29a3000, compress 0x0/0x0/0x0, omap 0x76079, meta 0x8379f87), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:41.044796+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 33480704 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d6172e4c00 session 0x55d619ac1880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d618058400 session 0x55d61779f340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d61776a000 session 0x55d619f46380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d619ddd800 session 0x55d6177ab500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:42.044953+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 33480704 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27c400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d61c27c400 session 0x55d61738aa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 ms_handle_reset con 0x55d6172e4c00 session 0x55d619beb340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:43.045114+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 33480704 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f4e69000/0x0/0x4ffc00000, data 0x282806e/0x29a3000, compress 0x0/0x0/0x0, omap 0x7639d, meta 0x8379c63), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 ms_handle_reset con 0x55d61776a000 session 0x55d61a5d5a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359453 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:44.045265+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f4e69000/0x0/0x4ffc00000, data 0x282806e/0x29a3000, compress 0x0/0x0/0x0, omap 0x7639d, meta 0x8379c63), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:45.045444+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:46.045574+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:47.045723+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:48.045870+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f4e65000/0x0/0x4ffc00000, data 0x2829c4e/0x29a5000, compress 0x0/0x0/0x0, omap 0x76810, meta 0x83797f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359453 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:49.046000+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:50.046210+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.720301628s of 10.849353790s, submitted: 56
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 ms_handle_reset con 0x55d618058400 session 0x55d617fd68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:51.046390+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 33456128 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 470 handle_osd_map epochs [471,471], i have 471, src has [1,471]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:52.046564+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 471 ms_handle_reset con 0x55d619ddd800 session 0x55d61a06da40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204988416 unmapped: 33447936 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61d2fec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 471 ms_handle_reset con 0x55d61d2fec00 session 0x55d61a7c4fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:53.046748+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 471 ms_handle_reset con 0x55d6172e4c00 session 0x55d6177a8a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 204996608 unmapped: 33439744 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d61776a000 session 0x55d619aa9a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d618058400 session 0x55d6183c0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d619ddd800 session 0x55d619df0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378892 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:54.047007+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619dda800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d619dda800 session 0x55d61a12b880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f4e5a000/0x0/0x4ffc00000, data 0x282d359/0x29ae000, compress 0x0/0x0/0x0, omap 0x89f73, meta 0x836608d), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205021184 unmapped: 33415168 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d6172e4c00 session 0x55d6173476c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:55.047188+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205037568 unmapped: 33398784 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d61776a000 session 0x55d6197eac40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d618058400 session 0x55d61a58f6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:56.047306+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205053952 unmapped: 33382400 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d619ddd800 session 0x55d619d77880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a05b800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d61a05b800 session 0x55d6197ea8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f4e5e000/0x0/0x4ffc00000, data 0x282d359/0x29ae000, compress 0x0/0x0/0x0, omap 0x8a6eb, meta 0x8365915), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:57.047457+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205062144 unmapped: 33374208 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f4e5f000/0x0/0x4ffc00000, data 0x282d2f7/0x29ad000, compress 0x0/0x0/0x0, omap 0x8a9cc, meta 0x8365634), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d6172e4c00 session 0x55d617f5d180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d61776a000 session 0x55d619aa9dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:58.047610+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205062144 unmapped: 33374208 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f4e60000/0x0/0x4ffc00000, data 0x282d295/0x29ac000, compress 0x0/0x0/0x0, omap 0x8aa52, meta 0x83655ae), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373767 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d618058400 session 0x55d617347880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:59.047766+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d619ddd800 session 0x55d61a58ea80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205062144 unmapped: 33374208 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 114K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 11K syncs, 2.71 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per commit group, ingest: 25.65 MB, 0.04 MB/s
                                           Interval WAL: 11K writes, 5006 syncs, 2.31 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:00.047904+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27f800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.813362122s of 10.061690331s, submitted: 122
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 ms_handle_reset con 0x55d61c27f800 session 0x55d617fd7340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205070336 unmapped: 33366016 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 ms_handle_reset con 0x55d6172e4c00 session 0x55d6197eb500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:01.048025+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205078528 unmapped: 33357824 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 ms_handle_reset con 0x55d61776a000 session 0x55d619aa9dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 ms_handle_reset con 0x55d618058400 session 0x55d617fd68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:02.048190+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f4e5b000/0x0/0x4ffc00000, data 0x282ee85/0x29af000, compress 0x0/0x0/0x0, omap 0x8adcf, meta 0x8365231), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:03.048361+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376045 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:04.048587+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:05.048793+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:06.048914+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f4e5c000/0x0/0x4ffc00000, data 0x282ee75/0x29ae000, compress 0x0/0x0/0x0, omap 0x8b0b0, meta 0x8364f50), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:07.049031+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 ms_handle_reset con 0x55d619ddd800 session 0x55d619f468c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:08.049311+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f4e5c000/0x0/0x4ffc00000, data 0x282ee75/0x29ae000, compress 0x0/0x0/0x0, omap 0x8b136, meta 0x8364eca), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 473 handle_osd_map epochs [474,474], i have 474, src has [1,474]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378819 data_alloc: 234881024 data_used: 13160577
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:09.049593+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 474 ms_handle_reset con 0x55d61c27e000 session 0x55d619bd0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:10.049754+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.159008026s of 10.238342285s, submitted: 58
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 475 ms_handle_reset con 0x55d61c27e000 session 0x55d61a12b180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:11.050027+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 476 ms_handle_reset con 0x55d6172e4c00 session 0x55d617fd7340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 476 ms_handle_reset con 0x55d61a050000 session 0x55d619653880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:12.050197+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205111296 unmapped: 33325056 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 477 ms_handle_reset con 0x55d61776a000 session 0x55d617346000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:13.050403+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d618058400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388474 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:14.050541+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 heartbeat osd_stat(store_statfs(0x4f4e50000/0x0/0x4ffc00000, data 0x2835c1c/0x29ba000, compress 0x0/0x0/0x0, omap 0x8bbed, meta 0x8364413), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 ms_handle_reset con 0x55d618058400 session 0x55d6173461c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 ms_handle_reset con 0x55d6172e4c00 session 0x55d617f5d6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:15.050760+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:16.050923+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:17.051106+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:18.051278+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205119488 unmapped: 33316864 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391712 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:19.051496+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 heartbeat osd_stat(store_statfs(0x4f4e4b000/0x0/0x4ffc00000, data 0x2837828/0x29bd000, compress 0x0/0x0/0x0, omap 0x8bd53, meta 0x83642ad), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 478 handle_osd_map epochs [479,479], i have 479, src has [1,479]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 33300480 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:20.051774+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 33300480 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:21.051918+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 33300480 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:22.053148+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 33300480 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f4e4a000/0x0/0x4ffc00000, data 0x28392c3/0x29c0000, compress 0x0/0x0/0x0, omap 0x8beb9, meta 0x8364147), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:23.053315+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 33300480 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.832316399s of 13.080286026s, submitted: 58
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:24.053562+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396796 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:25.053820+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:26.053965+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:27.054191+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:28.054339+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e47000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c6d7, meta 0x8363929), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:29.054493+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396796 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d61776a000 session 0x55d617fd6000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:30.054638+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:31.054772+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d61a050000 session 0x55d61a1f1340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d61c27e000 session 0x55d61a58e380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:32.054961+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddd800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d619ddd800 session 0x55d619652e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:33.055136+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c7e3, meta 0x836381d), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 32251904 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d6172e4c00 session 0x55d619f47a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.254230499s of 10.270366669s, submitted: 16
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d61776a000 session 0x55d619df1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:34.055358+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e48000/0x0/0x4ffc00000, data 0x283ada4/0x29c4000, compress 0x0/0x0/0x0, omap 0x8c869, meta 0x8363797), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:35.055555+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:36.055752+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:37.055936+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:38.056059+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:39.056275+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:40.056428+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:41.056583+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:42.056778+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:43.056990+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:44.057222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:45.057501+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:46.057653+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:47.058005+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:48.058195+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:49.058350+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:50.058535+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:51.058768+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.761486053s of 17.783672333s, submitted: 8
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:52.058947+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:53.059138+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 32243712 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:54.059278+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:55.059469+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:56.059661+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:57.059832+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:58.059979+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:59.060126+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:00.060381+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:01.060595+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:02.060778+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:03.060919+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:04.061076+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:05.061233+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:06.061349+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:07.061525+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:08.061755+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:09.062002+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:10.062184+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:11.062353+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:12.062508+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:13.062889+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:14.063266+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397041 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:15.063904+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:16.064071+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:17.064346+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.702663422s of 25.841566086s, submitted: 90
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 ms_handle_reset con 0x55d61a050000 session 0x55d619bd0e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:18.064525+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f4e49000/0x0/0x4ffc00000, data 0x283ad42/0x29c3000, compress 0x0/0x0/0x0, omap 0x8c8ef, meta 0x8363711), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:19.064734+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398834 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61c27e000 session 0x55d6198a0e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:20.064857+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdeec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc40400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61bc40400 session 0x55d6197721c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61cdeec00 session 0x55d61a06c540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:21.065057+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 32186368 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f4e42000/0x0/0x4ffc00000, data 0x283c9a2/0x29c8000, compress 0x0/0x0/0x0, omap 0x8cadc, meta 0x8363524), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc40400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61bc40400 session 0x55d619f46e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d6172e4c00 session 0x55d6184f6540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:22.065319+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206258176 unmapped: 32178176 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:23.065484+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61776a000 session 0x55d61a06c8c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 ms_handle_reset con 0x55d61c27e000 session 0x55d61a7c4000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f4e44000/0x0/0x4ffc00000, data 0x283c9a2/0x29c8000, compress 0x0/0x0/0x0, omap 0x8cadc, meta 0x8363524), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 32169984 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 ms_handle_reset con 0x55d61c27e000 session 0x55d619beb340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 ms_handle_reset con 0x55d61a050000 session 0x55d619d768c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:24.065787+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405979 data_alloc: 234881024 data_used: 13161260
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 ms_handle_reset con 0x55d6172e4c00 session 0x55d6197eaa80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 ms_handle_reset con 0x55d61776a000 session 0x55d6196521c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:25.066027+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:26.066234+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:27.066441+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:28.066601+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:29.066779+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f4e43000/0x0/0x4ffc00000, data 0x283e4ce/0x29c9000, compress 0x0/0x0/0x0, omap 0x8ccc9, meta 0x8363337), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404519 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:30.067001+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:31.067170+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f4e43000/0x0/0x4ffc00000, data 0x283e4ce/0x29c9000, compress 0x0/0x0/0x0, omap 0x8ccc9, meta 0x8363337), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:32.067417+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:33.067573+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.749118805s of 15.856882095s, submitted: 47
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:34.067844+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:35.068158+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:36.068397+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:37.068794+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:38.069009+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:39.069229+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:40.069464+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:41.069682+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:42.069893+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:43.070145+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:44.070348+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 32145408 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:45.070509+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:46.070739+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:47.070962+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:48.071138+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:49.071373+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:50.071496+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:51.071683+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:52.071876+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:53.072000+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:54.072206+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:55.072437+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:56.072679+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:57.072987+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:58.073223+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:59.073479+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:00.073646+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:01.073803+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206299136 unmapped: 32137216 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e3e000/0x0/0x4ffc00000, data 0x283ff4d/0x29cc000, compress 0x0/0x0/0x0, omap 0x8ce30, meta 0x83631d0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:02.074038+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206307328 unmapped: 32129024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:03.074183+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206307328 unmapped: 32129024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:04.074420+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408013 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206307328 unmapped: 32129024 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc40400
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.720756531s of 31.731000900s, submitted: 12
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:05.074623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 31866880 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:06.074775+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 ms_handle_reset con 0x55d61bc40400 session 0x55d619aa9c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 31866880 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:07.074950+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f4e00000/0x0/0x4ffc00000, data 0x287ff4d/0x2a0c000, compress 0x0/0x0/0x0, omap 0x8d111, meta 0x8362eef), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 31866880 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:08.075106+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:09.075263+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d6172e4c00 session 0x55d61a1f1dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417208 data_alloc: 234881024 data_used: 13161162
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:10.075388+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61776a000 session 0x55d61838ba40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:11.075530+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:12.075657+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:13.075837+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfa000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:14.076109+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417336 data_alloc: 234881024 data_used: 13165258
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfa000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:15.076267+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfa000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:16.076394+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:17.076601+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:18.076779+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfa000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:19.076925+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417336 data_alloc: 234881024 data_used: 13165258
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:20.077087+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:21.077234+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 31858688 heap: 238436352 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:22.077362+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61a050000 session 0x55d619f46380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61c27e000 session 0x55d6183c0700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61cdeec00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61cdeec00 session 0x55d61a7c48c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d6172e4c00 session 0x55d6198a0fc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.487417221s of 17.524330139s, submitted: 18
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61776a000 session 0x55d61a58fa40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:23.077516+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f45a0000/0x0/0x4ffc00000, data 0x30ddb4c/0x326c000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:24.077634+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466602 data_alloc: 234881024 data_used: 13165258
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:25.077914+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:26.078056+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:27.078195+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:28.078358+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61a050000 session 0x55d619d77c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206856192 unmapped: 35782656 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:29.078474+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61c27e000 session 0x55d617fe2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f45a0000/0x0/0x4ffc00000, data 0x30ddb4c/0x326c000, compress 0x0/0x0/0x0, omap 0x8db09, meta 0x83624f7), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466602 data_alloc: 234881024 data_used: 13165258
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805b000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61805b000 session 0x55d61a780380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d6172e4c00 session 0x55d617fe2540
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 35774464 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:30.078629+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 35774464 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:31.078749+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f459f000/0x0/0x4ffc00000, data 0x30ddb5b/0x326d000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:32.078859+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:33.078998+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:34.079115+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513603 data_alloc: 234881024 data_used: 20602058
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:35.079600+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f459f000/0x0/0x4ffc00000, data 0x30ddb5b/0x326d000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:36.079732+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f459f000/0x0/0x4ffc00000, data 0x30ddb5b/0x326d000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:37.079860+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:38.079993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:39.080139+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513603 data_alloc: 234881024 data_used: 20602058
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:40.080261+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f459f000/0x0/0x4ffc00000, data 0x30ddb5b/0x326d000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 33169408 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f459f000/0x0/0x4ffc00000, data 0x30ddb5b/0x326d000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:41.080420+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.534845352s of 18.608541489s, submitted: 18
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213155840 unmapped: 29483008 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:42.080561+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213663744 unmapped: 28975104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:43.080822+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d3a000/0x0/0x4ffc00000, data 0x3941b5b/0x3ad1000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213663744 unmapped: 28975104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:44.080993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572141 data_alloc: 234881024 data_used: 20667594
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213663744 unmapped: 28975104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:45.081161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d3a000/0x0/0x4ffc00000, data 0x3941b5b/0x3ad1000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213663744 unmapped: 28975104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:46.081321+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213663744 unmapped: 28975104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:47.081457+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:48.081626+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:49.081832+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569229 data_alloc: 234881024 data_used: 20667594
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:50.082005+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x3963b5b/0x3af3000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:51.082247+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:52.082417+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:53.082594+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x3963b5b/0x3af3000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:54.082747+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569357 data_alloc: 234881024 data_used: 20667693
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:55.082897+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:56.083046+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x3963b5b/0x3af3000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:57.083198+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.803193092s of 16.022220612s, submitted: 52
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61c27e000 session 0x55d6183c0a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:58.083320+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d14000/0x0/0x4ffc00000, data 0x3968b5b/0x3af8000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3d14000/0x0/0x4ffc00000, data 0x3968b5b/0x3af8000, compress 0x0/0x0/0x0, omap 0x8f82c, meta 0x83607d4), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61776a000 session 0x55d61a7c5500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 213942272 unmapped: 28696576 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d61a050000 session 0x55d619bedc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:59.083437+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d619ddb000 session 0x55d619d776c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429577 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209977344 unmapped: 32661504 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:00.083634+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209977344 unmapped: 32661504 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:01.083865+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209977344 unmapped: 32661504 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:02.084122+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfc000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8faa6, meta 0x836055a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209977344 unmapped: 32661504 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:03.084274+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d6172e4c00 session 0x55d619df0000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209985536 unmapped: 32653312 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f4dfc000/0x0/0x4ffc00000, data 0x2881b4c/0x2a10000, compress 0x0/0x0/0x0, omap 0x8faa6, meta 0x836055a), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:04.084465+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429853 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 ms_handle_reset con 0x55d619ddb000 session 0x55d619aa9180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209977344 unmapped: 32661504 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 485 ms_handle_reset con 0x55d61776a000 session 0x55d619bed880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:05.084774+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 485 ms_handle_reset con 0x55d61a050000 session 0x55d61a5661c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209985536 unmapped: 32653312 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:06.084991+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 485 ms_handle_reset con 0x55d61c27e000 session 0x55d619653180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209985536 unmapped: 32653312 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:07.085206+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.251415253s of 10.041988373s, submitted: 98
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 486 ms_handle_reset con 0x55d6172e4c00 session 0x55d61a5d5340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 486 ms_handle_reset con 0x55d61776a000 session 0x55d61a5d5880
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 486 ms_handle_reset con 0x55d619ddb000 session 0x55d619773c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209993728 unmapped: 32645120 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:08.085405+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 486 ms_handle_reset con 0x55d61a050000 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61c27e000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210010112 unmapped: 32628736 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:09.085576+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 487 ms_handle_reset con 0x55d61c27e000 session 0x55d619f47500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429223 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f4df6000/0x0/0x4ffc00000, data 0x2777f2b/0x2a12000, compress 0x0/0x0/0x0, omap 0x912bd, meta 0x835ed43), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:10.085795+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:11.085978+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f4df6000/0x0/0x4ffc00000, data 0x2777f2b/0x2a12000, compress 0x0/0x0/0x0, omap 0x912bd, meta 0x835ed43), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:12.086163+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:13.086363+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f4df6000/0x0/0x4ffc00000, data 0x2777f2b/0x2a12000, compress 0x0/0x0/0x0, omap 0x912bd, meta 0x835ed43), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:14.086542+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431245 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:15.086771+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f4df5000/0x0/0x4ffc00000, data 0x27799c6/0x2a15000, compress 0x0/0x0/0x0, omap 0x913e0, meta 0x835ec20), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:16.088341+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f4df5000/0x0/0x4ffc00000, data 0x27799c6/0x2a15000, compress 0x0/0x0/0x0, omap 0x913e0, meta 0x835ec20), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:17.089220+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f4df5000/0x0/0x4ffc00000, data 0x27799c6/0x2a15000, compress 0x0/0x0/0x0, omap 0x913e0, meta 0x835ec20), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:18.089365+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:19.089637+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431245 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210026496 unmapped: 32612352 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:20.089785+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _renew_subs
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.675860405s of 12.968883514s, submitted: 59
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d6172e4c00 session 0x55d61a06c380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61776a000 session 0x55d61838b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:21.090085+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:22.090397+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:23.090582+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4df2000/0x0/0x4ffc00000, data 0x277b445/0x2a18000, compress 0x0/0x0/0x0, omap 0x91c27, meta 0x835e3d9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:24.090898+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4df2000/0x0/0x4ffc00000, data 0x277b445/0x2a18000, compress 0x0/0x0/0x0, omap 0x91c27, meta 0x835e3d9), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434019 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:25.091166+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:26.091551+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:27.091886+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 210034688 unmapped: 32604160 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:28.092155+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d619ddb000 session 0x55d619bebdc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4db3000/0x0/0x4ffc00000, data 0x27bb4a8/0x2a59000, compress 0x0/0x0/0x0, omap 0x91cad, meta 0x835e353), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:29.092475+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438182 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:30.092755+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:31.093036+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:32.093183+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4db3000/0x0/0x4ffc00000, data 0x27bb4a8/0x2a59000, compress 0x0/0x0/0x0, omap 0x91cad, meta 0x835e353), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:33.093365+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:34.093581+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438182 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:35.093834+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:36.094014+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4db3000/0x0/0x4ffc00000, data 0x27bb4a8/0x2a59000, compress 0x0/0x0/0x0, omap 0x91cad, meta 0x835e353), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:37.094154+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:38.094277+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:39.094415+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 33210368 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4db3000/0x0/0x4ffc00000, data 0x27bb4a8/0x2a59000, compress 0x0/0x0/0x0, omap 0x91cad, meta 0x835e353), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438182 data_alloc: 234881024 data_used: 13421357
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61a050000 session 0x55d619bd0c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.732427597s of 19.800811768s, submitted: 18
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f4db3000/0x0/0x4ffc00000, data 0x27bb4a8/0x2a59000, compress 0x0/0x0/0x0, omap 0x91cad, meta 0x835e353), peers [0,2] op hist [0,0,0,0,0,1])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:40.094552+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 33071104 heap: 242638848 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61805bc00 session 0x55d619653c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d6172e4c00 session 0x55d6177ab500
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61776a000 session 0x55d61a12b340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:41.094785+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:42.094993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:43.095111+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:44.095263+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f2bfa000/0x0/0x4ffc00000, data 0x49744a7/0x4c12000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620037 data_alloc: 234881024 data_used: 13425354
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:45.095443+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f2bfa000/0x0/0x4ffc00000, data 0x49744a7/0x4c12000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:46.095590+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:47.095730+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:48.095895+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:49.096107+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f2bfa000/0x0/0x4ffc00000, data 0x49744a7/0x4c12000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620037 data_alloc: 234881024 data_used: 13425354
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:50.096651+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:51.096866+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:52.097285+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 37142528 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.857565880s of 12.273908615s, submitted: 56
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61805bc00 session 0x55d6177a9180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:53.097674+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:54.098026+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621646 data_alloc: 234881024 data_used: 13425354
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:55.098193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f2bf9000/0x0/0x4ffc00000, data 0x49744ca/0x4c13000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:56.098360+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:57.098618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:58.098789+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:59.099023+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631758 data_alloc: 234881024 data_used: 15101130
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:00.099248+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:01.099421+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f2bf9000/0x0/0x4ffc00000, data 0x49744ca/0x4c13000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:02.099649+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:03.099872+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:04.100037+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632398 data_alloc: 234881024 data_used: 15121610
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:05.100218+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 209707008 unmapped: 37134336 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.103594780s of 13.116017342s, submitted: 4
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:06.100419+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 224763904 unmapped: 22077440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:07.100571+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f21ee000/0x0/0x4ffc00000, data 0x537f4ca/0x561e000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:08.100795+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:09.100936+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717902 data_alloc: 234881024 data_used: 15595722
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:10.101169+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:11.101345+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:12.101523+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:13.101688+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f1de6000/0x0/0x4ffc00000, data 0x57874ca/0x5a26000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:14.101861+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 217227264 unmapped: 29614080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718158 data_alloc: 234881024 data_used: 15603914
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:15.102021+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218275840 unmapped: 28565504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.667717934s of 10.077672005s, submitted: 117
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d619ddb000 session 0x55d615c75180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a050000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f1de6000/0x0/0x4ffc00000, data 0x57874ca/0x5a26000, compress 0x0/0x0/0x0, omap 0x91db9, meta 0x835e247), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 ms_handle_reset con 0x55d61a050000 session 0x55d61a1f16c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:16.102180+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218300416 unmapped: 28540928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:17.102336+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218300416 unmapped: 28540928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:18.102504+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218300416 unmapped: 28540928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:19.102671+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218300416 unmapped: 28540928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6172e4c00 session 0x55d61838b6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721818 data_alloc: 234881024 data_used: 15632551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:20.103077+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de2000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x9221e, meta 0x835dde2), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61776a000 session 0x55d619d77c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:21.103271+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:22.103432+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:23.103564+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:24.103668+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721833 data_alloc: 234881024 data_used: 15632551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:25.103859+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:26.103968+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de0000/0x0/0x4ffc00000, data 0x57890b5/0x5a2a000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de0000/0x0/0x4ffc00000, data 0x57890b5/0x5a2a000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:27.104095+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:28.104239+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:29.104413+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de0000/0x0/0x4ffc00000, data 0x57890b5/0x5a2a000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de0000/0x0/0x4ffc00000, data 0x57890b5/0x5a2a000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721833 data_alloc: 234881024 data_used: 15632551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:30.104547+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61805bc00 session 0x55d617346380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:31.104686+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218308608 unmapped: 28532736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d619ddb000 session 0x55d619653180
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:32.104839+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6196dfc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6196dfc00 session 0x55d61a2d2c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.086864471s of 17.144132614s, submitted: 24
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6172e4c00 session 0x55d619beda40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:33.105021+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:34.105154+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1dbd000/0x0/0x4ffc00000, data 0x57ad0c5/0x5a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725235 data_alloc: 234881024 data_used: 15632567
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:35.105308+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:36.105424+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:37.105564+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:38.105656+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1dbd000/0x0/0x4ffc00000, data 0x57ad0c5/0x5a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:39.105767+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726259 data_alloc: 234881024 data_used: 15717047
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:40.105963+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:41.106092+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:42.106230+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:43.106407+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:44.106545+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1dbd000/0x0/0x4ffc00000, data 0x57ad0c5/0x5a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:45.107226+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726259 data_alloc: 234881024 data_used: 15717047
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 218480640 unmapped: 28360704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:46.107359+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.133109093s of 14.139578819s, submitted: 2
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:47.107489+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:48.107622+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f15bd000/0x0/0x4ffc00000, data 0x5fad0c5/0x624f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:49.107776+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f15bd000/0x0/0x4ffc00000, data 0x5fad0c5/0x624f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:50.107912+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797139 data_alloc: 234881024 data_used: 21028023
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:51.108091+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f15bd000/0x0/0x4ffc00000, data 0x5fad0c5/0x624f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 25092096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:52.108275+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 24657920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:53.108443+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 24657920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:54.108585+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 24657920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:55.108763+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819951 data_alloc: 234881024 data_used: 21028023
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f11bd000/0x0/0x4ffc00000, data 0x63ad0c5/0x664f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f11bd000/0x0/0x4ffc00000, data 0x63ad0c5/0x664f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 24657920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:56.108903+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 24657920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:57.109051+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.777740479s of 10.845247269s, submitted: 20
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 24641536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:58.109225+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 24641536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:59.109394+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f11bd000/0x0/0x4ffc00000, data 0x63ad0c5/0x664f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 24641536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets getting new tickets!
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:00.109656+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _finish_auth 0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:00.110546+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818479 data_alloc: 234881024 data_used: 21032119
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 24616960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:01.109804+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 24616960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:02.109901+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f0dbd000/0x0/0x4ffc00000, data 0x67ad0c5/0x6a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:03.110064+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d619e7b800 session 0x55d61a204380
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d619ddb000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:04.110240+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc ms_handle_reset ms_handle_reset con 0x55d619762000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/273425939
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/273425939,v1:192.168.122.100:6801/273425939]
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: get_auth_request con 0x55d6196dfc00 auth_method 0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: mgrc handle_mgr_configure stats_period=5
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:05.110456+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843595 data_alloc: 234881024 data_used: 22097079
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61983ec00 session 0x55d6197736c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bc41000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6183d2000 session 0x55d6178d6e00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61b3f8c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:06.110620+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:07.110802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:08.110971+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f0dbd000/0x0/0x4ffc00000, data 0x67ad0c5/0x6a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:09.111136+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:10.111301+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843595 data_alloc: 234881024 data_used: 22097079
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:11.111461+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:12.111617+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f0dbd000/0x0/0x4ffc00000, data 0x67ad0c5/0x6a4f000, compress 0x0/0x0/0x0, omap 0x922a4, meta 0x835dd5c), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 23756800 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:13.111783+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 23715840 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:14.111906+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61776a000 session 0x55d619652c40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.593950272s of 16.629344940s, submitted: 16
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61805bc00 session 0x55d61a567dc0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183d2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6183d2000 session 0x55d6177b2a80
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 23715840 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:15.112082+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3838666 data_alloc: 234881024 data_used: 23176375
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 23715840 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:16.112300+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f0de2000/0x0/0x4ffc00000, data 0x67890b5/0x6a2a000, compress 0x0/0x0/0x0, omap 0x91f3e, meta 0x835e0c2), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 23715840 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:17.112462+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f0de2000/0x0/0x4ffc00000, data 0x67890b5/0x6a2a000, compress 0x0/0x0/0x0, omap 0x91f3e, meta 0x835e0c2), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 23715840 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:18.112610+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61a054800
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61a054800 session 0x55d617347c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6172e4c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6172e4c00 session 0x55d6178bf340
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:19.112806+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:20.113011+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749002 data_alloc: 234881024 data_used: 21644455
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:21.113229+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61776a000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61776a000 session 0x55d6184eb6c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61805bc00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61805bc00 session 0x55d619907c00
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:22.113452+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:23.113660+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:24.113843+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:25.114199+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:26.114336+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:27.114482+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:28.114689+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:29.114973+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:30.115140+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:31.115316+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:32.115494+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:33.115790+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:34.115942+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:35.116160+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:36.116349+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:37.116479+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:38.116670+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:39.116844+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:40.116994+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:41.117155+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:42.117371+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:43.117538+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d6177fbc00 session 0x55d619ac1a40
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d6183d2000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:44.117689+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:45.117912+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223166464 unmapped: 23674880 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:46.118081+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223174656 unmapped: 23666688 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:47.118481+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223174656 unmapped: 23666688 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:48.118628+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223174656 unmapped: 23666688 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:49.118799+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223174656 unmapped: 23666688 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:50.118925+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21648551
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223174656 unmapped: 23666688 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:51.119033+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.365932465s of 37.424438477s, submitted: 26
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223182848 unmapped: 23658496 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:52.119171+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:53.119327+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223182848 unmapped: 23658496 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:54.119531+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223182848 unmapped: 23658496 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:55.119699+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223191040 unmapped: 23650304 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:56.119937+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223191040 unmapped: 23650304 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:57.120161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223191040 unmapped: 23650304 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:58.120322+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223191040 unmapped: 23650304 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:59.120548+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223191040 unmapped: 23650304 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:00.120659+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:01.120854+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:02.120985+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:03.121152+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:04.121278+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:05.121477+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:06.121635+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:07.121777+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 23642112 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:08.121937+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:09.122101+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:10.122263+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:11.122440+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:12.122601+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:13.122763+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:14.122943+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:15.123168+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223207424 unmapped: 23633920 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:16.123312+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:17.123436+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:18.123605+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:19.123763+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:20.123920+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:21.124039+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 23625728 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:22.124159+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:23.124297+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:24.124505+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:25.124680+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:26.124891+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:27.125041+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:28.125188+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:29.125328+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223223808 unmapped: 23617536 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:30.125474+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:31.125599+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:32.125814+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:33.125981+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:34.126243+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:35.126445+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223232000 unmapped: 23609344 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:36.126659+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223240192 unmapped: 23601152 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:37.126868+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223240192 unmapped: 23601152 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:38.126998+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223240192 unmapped: 23601152 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:39.127135+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223240192 unmapped: 23601152 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:40.127269+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:41.127401+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:42.127544+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:43.127822+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:44.128111+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:45.128381+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:46.128540+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:47.128807+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 23592960 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:48.128993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:49.129112+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:50.129290+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:51.129466+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:52.129690+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:53.129904+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223256576 unmapped: 23584768 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:54.130056+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:55.130248+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:56.130376+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:57.130540+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:58.130668+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:59.130847+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:00.131018+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:01.131189+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223264768 unmapped: 23576576 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:02.131385+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:03.131559+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:04.131788+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:05.132886+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:06.133062+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:07.133254+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:08.133404+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:09.133537+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:10.133676+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:11.133806+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223272960 unmapped: 23568384 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:12.133936+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:13.134108+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:14.134310+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:15.134539+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:16.134685+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223281152 unmapped: 23560192 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:17.134877+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:18.135067+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:19.135222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:20.135432+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:21.135643+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:22.135910+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:23.136141+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223289344 unmapped: 23552000 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:24.136327+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223297536 unmapped: 23543808 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:25.136579+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223297536 unmapped: 23543808 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:26.136816+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223297536 unmapped: 23543808 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:27.137072+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223297536 unmapped: 23543808 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:28.137290+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:29.137475+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:30.137762+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:31.137961+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:32.138196+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:33.138408+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:34.138612+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:35.138852+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 23535616 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:36.139110+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:37.139353+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:38.139554+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:39.139801+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:40.140009+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:41.140255+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223313920 unmapped: 23527424 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:42.140477+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223322112 unmapped: 23519232 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:43.140663+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223322112 unmapped: 23519232 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:44.140873+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223322112 unmapped: 23519232 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:45.141070+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223322112 unmapped: 23519232 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:46.141265+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223322112 unmapped: 23519232 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:47.141492+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223330304 unmapped: 23511040 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:48.141764+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223330304 unmapped: 23511040 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:49.141967+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223330304 unmapped: 23511040 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:50.142126+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223330304 unmapped: 23511040 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:51.142264+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223330304 unmapped: 23511040 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:52.142477+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:53.142656+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:54.142874+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:55.143121+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:56.143260+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:57.143387+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:58.143537+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:59.143665+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223338496 unmapped: 23502848 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:00.143838+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:01.144018+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:02.144192+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:03.144399+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:04.144588+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:05.144828+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223346688 unmapped: 23494656 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:06.144979+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:07.145133+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:08.145322+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:09.145515+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:10.145676+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:11.145843+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223354880 unmapped: 23486464 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:12.145976+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 23478272 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:13.146171+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 23478272 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:14.146354+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 23478272 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:15.146599+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 23478272 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:16.146800+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223371264 unmapped: 23470080 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:17.147003+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223379456 unmapped: 23461888 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223379456 unmapped: 23461888 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:18.876239+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223379456 unmapped: 23461888 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:19.876451+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223379456 unmapped: 23461888 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:20.876615+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:21.876965+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:22.877160+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:23.877358+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:24.877535+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:25.877823+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:26.877997+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223387648 unmapped: 23453696 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:27.878240+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 23445504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:28.878395+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 23445504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:29.878511+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 23445504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:30.878676+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 23445504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:31.878898+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 23445504 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:32.879092+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:33.879317+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:34.879504+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:35.879745+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:36.879911+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:37.880101+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:38.880405+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223404032 unmapped: 23437312 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:39.881255+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:40.881428+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:41.881815+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:42.882011+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:43.882186+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:44.882371+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:45.883412+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:46.883571+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 23429120 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:47.883735+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:48.883948+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:49.884196+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:50.884802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:51.885026+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:52.885189+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223420416 unmapped: 23420928 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:53.885344+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 23412736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:54.885693+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 23412736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:55.885946+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 23412736 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:56.886222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223436800 unmapped: 23404544 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:57.886537+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223436800 unmapped: 23404544 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:58.886740+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223436800 unmapped: 23404544 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:59.886852+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223444992 unmapped: 23396352 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:00.886989+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223444992 unmapped: 23396352 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:01.887161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223444992 unmapped: 23396352 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:02.887328+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223444992 unmapped: 23396352 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:03.887507+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:04.887687+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:05.887921+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:06.888138+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:07.888319+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:08.888497+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:09.888648+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:10.888886+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223461376 unmapped: 23379968 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:11.889089+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:12.889296+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:13.889497+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:14.889651+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:15.889891+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:16.890084+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:17.890235+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223469568 unmapped: 23371776 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:18.890413+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223477760 unmapped: 23363584 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:19.890567+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223477760 unmapped: 23363584 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:21.776874+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223477760 unmapped: 23363584 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:22.777014+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223477760 unmapped: 23363584 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:23.777147+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223477760 unmapped: 23363584 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:24.777311+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223485952 unmapped: 23355392 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:25.777445+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223485952 unmapped: 23355392 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:26.777650+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223485952 unmapped: 23355392 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:27.777781+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223485952 unmapped: 23355392 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:28.777922+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:29.778054+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:30.778222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:31.778409+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:32.778601+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:33.778800+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:34.778965+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223494144 unmapped: 23347200 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:35.779148+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223502336 unmapped: 23339008 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:36.779339+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223502336 unmapped: 23339008 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:37.779484+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223502336 unmapped: 23339008 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:38.779628+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223502336 unmapped: 23339008 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:39.779786+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223510528 unmapped: 23330816 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:40.779989+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223510528 unmapped: 23330816 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:41.780190+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223510528 unmapped: 23330816 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 234881024 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:42.780351+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223518720 unmapped: 23322624 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:43.780530+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223518720 unmapped: 23322624 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:44.780753+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:45.780890+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:46.781126+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:47.781316+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:48.781660+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:49.781821+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:50.782107+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223526912 unmapped: 23314432 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:51.782394+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223535104 unmapped: 23306240 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:52.782579+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223535104 unmapped: 23306240 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:53.782737+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223535104 unmapped: 23306240 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:54.782877+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223535104 unmapped: 23306240 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:55.783142+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223535104 unmapped: 23306240 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:56.783431+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:57.783594+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:58.783808+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 118K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.68 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1490 writes, 4587 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
                                           Interval WAL: 1490 writes, 651 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:59.784219+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:00.784387+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:01.784868+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:02.785029+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:03.785297+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223543296 unmapped: 23298048 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:04.785452+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:05.785601+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:06.785831+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:07.785984+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:08.786161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:09.786337+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223551488 unmapped: 23289856 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:10.786546+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223559680 unmapped: 23281664 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:11.786796+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223559680 unmapped: 23281664 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:12.786968+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223576064 unmapped: 23265280 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:13.787138+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223576064 unmapped: 23265280 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:14.787400+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223576064 unmapped: 23265280 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:15.787530+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223584256 unmapped: 23257088 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:16.787815+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223584256 unmapped: 23257088 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223584256 unmapped: 23257088 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:18.126897+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223584256 unmapped: 23257088 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:19.127110+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223584256 unmapped: 23257088 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:20.127313+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:21.127505+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:22.127776+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:23.127977+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:24.128220+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:25.128422+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223592448 unmapped: 23248896 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:26.128587+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223600640 unmapped: 23240704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:27.128769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223600640 unmapped: 23240704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:28.128929+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:29.129161+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223600640 unmapped: 23240704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:30.129385+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223600640 unmapped: 23240704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:31.129604+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223600640 unmapped: 23240704 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:32.129781+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:33.129981+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:34.130194+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:35.130419+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:36.130576+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:37.130769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 23232512 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:38.130918+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:39.131096+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:40.131250+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:41.131451+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:42.131627+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:43.131807+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:44.131976+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 23224320 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:45.132132+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 23216128 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:46.132291+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:47.132556+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:48.132880+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:49.133271+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:50.133431+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:51.133620+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223633408 unmapped: 23207936 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.021148682s of 300.053833008s, submitted: 22
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:52.133769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223657984 unmapped: 23183360 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:53.133931+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223657984 unmapped: 23183360 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:54.134085+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223674368 unmapped: 23166976 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:55.134401+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:56.134570+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:57.134765+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:58.134890+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:59.135028+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:00.135192+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:01.135467+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223715328 unmapped: 23126016 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:02.135655+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223723520 unmapped: 23117824 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:03.135807+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223723520 unmapped: 23117824 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:04.135980+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223723520 unmapped: 23117824 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:05.136206+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223723520 unmapped: 23117824 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:06.136368+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223731712 unmapped: 23109632 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:07.136618+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223731712 unmapped: 23109632 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:08.136797+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223731712 unmapped: 23109632 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:09.136977+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223731712 unmapped: 23109632 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:10.137112+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:11.137293+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:12.137484+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:13.137627+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:14.137835+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:15.137975+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223739904 unmapped: 23101440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:16.138167+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223748096 unmapped: 23093248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:17.139088+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223748096 unmapped: 23093248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:18.139246+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223748096 unmapped: 23093248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:19.139466+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223748096 unmapped: 23093248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:20.139623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:21.139754+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:22.139893+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:23.139986+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:24.140143+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:25.140274+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:26.140402+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:27.140589+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223756288 unmapped: 23085056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:28.140769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223764480 unmapped: 23076864 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:29.140882+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223764480 unmapped: 23076864 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:30.140999+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223764480 unmapped: 23076864 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:31.141133+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223764480 unmapped: 23076864 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:32.141262+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223764480 unmapped: 23076864 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:33.141435+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:34.141630+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:35.141830+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:36.141945+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:37.142140+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:38.142291+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:39.142432+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223772672 unmapped: 23068672 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:40.142540+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223780864 unmapped: 23060480 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:41.142681+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223780864 unmapped: 23060480 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:42.142809+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223780864 unmapped: 23060480 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:43.142937+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223780864 unmapped: 23060480 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:44.143039+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223780864 unmapped: 23060480 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:45.143162+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:46.143310+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:47.143503+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:48.143628+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:49.143769+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:50.143895+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:51.144038+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223789056 unmapped: 23052288 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:52.144174+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:53.144365+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:54.144507+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:55.144641+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:56.144783+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:57.144986+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:58.145143+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:59.145319+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223797248 unmapped: 23044096 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:00.145497+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223805440 unmapped: 23035904 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:01.145632+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223805440 unmapped: 23035904 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:02.145794+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223805440 unmapped: 23035904 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:03.145917+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223805440 unmapped: 23035904 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:04.146042+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:05.146197+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:06.146363+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:07.146577+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:08.146768+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:09.146897+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223813632 unmapped: 23027712 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:10.147037+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223821824 unmapped: 23019520 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:11.147222+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223830016 unmapped: 23011328 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:12.147359+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223830016 unmapped: 23011328 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:13.147516+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223830016 unmapped: 23011328 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:14.147643+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223830016 unmapped: 23011328 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:15.147833+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223830016 unmapped: 23011328 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:16.147995+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223838208 unmapped: 23003136 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:17.148183+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223838208 unmapped: 23003136 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:18.148316+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223838208 unmapped: 23003136 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:19.148456+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223838208 unmapped: 23003136 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:20.148595+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223846400 unmapped: 22994944 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:21.148779+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223846400 unmapped: 22994944 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:22.148944+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223846400 unmapped: 22994944 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:23.149075+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223846400 unmapped: 22994944 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:24.149215+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:25.149366+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:26.149483+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:27.149623+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:28.149767+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:29.149900+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 22986752 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:30.150077+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:31.150254+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:32.150445+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:33.150571+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:34.150732+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:35.150848+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:36.151077+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 22978560 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:37.151331+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223870976 unmapped: 22970368 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:38.151561+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223870976 unmapped: 22970368 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:39.151802+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223870976 unmapped: 22970368 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:40.152056+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:41.152221+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:42.152437+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:43.152613+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:44.152811+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:45.153030+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:46.153232+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:47.153494+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:48.153671+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223879168 unmapped: 22962176 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:49.153838+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:50.154090+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:51.154804+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:52.154976+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:53.155472+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:54.155646+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223887360 unmapped: 22953984 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:55.155863+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:56.156007+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:57.156224+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:58.156347+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:59.156517+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:00.156773+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:01.156947+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223895552 unmapped: 22945792 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:02.157097+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:03.157232+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:04.157403+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:05.157568+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:06.157784+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:07.157979+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f1de4000/0x0/0x4ffc00000, data 0x5789043/0x5a28000, compress 0x0/0x0/0x0, omap 0x920d0, meta 0x835df30), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749066 data_alloc: 218103808 data_used: 21649011
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223903744 unmapped: 22937600 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:08.158112+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223911936 unmapped: 22929408 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:09.158272+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 223911936 unmapped: 22929408 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:10.158447+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 138.550216675s of 138.699676514s, submitted: 90
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d619ebf400 session 0x55d617346700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61bd55000 session 0x55d61a06c700
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d619e7d400 session 0x55d619aa88c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: handle_auth_request added challenge on 0x55d61bd55000
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 31293440 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 ms_handle_reset con 0x55d61bd55000 session 0x55d6178d68c0
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:11.158629+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:12.158784+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:13.158943+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:14.159073+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:15.159253+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:16.159406+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:17.159924+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:18.160057+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:19.160200+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:20.160334+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:21.160500+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:22.160665+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:23.160851+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:24.161018+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:25.161204+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:26.161355+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:27.161556+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:28.161772+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:29.161994+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:30.162193+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:31.162362+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:32.162531+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:33.162765+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:34.162930+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:35.163072+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:36.163238+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:37.163448+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:38.163600+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:39.163810+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:40.163964+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:41.164145+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:42.164287+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:43.164477+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:44.164644+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:45.164807+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:46.164993+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:47.165223+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:48.165387+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:49.165540+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:50.165730+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:51.165938+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:52.166147+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:53.166329+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:54.166492+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:55.166664+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:56.166878+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:57.167119+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:58.167301+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:59.167482+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:00.167615+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:01.167748+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:02.167902+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:03.168066+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:04.168245+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:05.168443+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:06.168666+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:07.168923+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:08.169127+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:09.169379+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:10.169546+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:11.169724+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:12.169902+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:13.170050+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:14.170239+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:15.170421+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:16.170602+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:17.170815+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:18.170950+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:19.171099+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:20.171279+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:21.171468+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:22.171632+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:23.171762+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:24.171951+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:25.172181+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:26.172363+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:27.172556+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:28.172750+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:29.172894+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:30.173063+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:31.173246+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:32.173423+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:33.173596+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:34.173801+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:35.174008+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:36.174196+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:37.174446+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:38.174582+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:39.174779+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:40.175057+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:41.175221+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:42.175379+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:43.175534+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:44.175797+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:45.175962+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:46.176152+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:47.176354+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:48.176504+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:49.176723+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:50.176951+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:51.177139+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:52.177268+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:53.177431+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:27 compute-0 ceph-osd[87170]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:27 compute-0 ceph-osd[87170]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564438 data_alloc: 218103808 data_used: 8650257
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215556096 unmapped: 31285248 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:54.177594+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215564288 unmapped: 31277056 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f308c000/0x0/0x4ffc00000, data 0x44e1fe1/0x4780000, compress 0x0/0x0/0x0, omap 0x92b10, meta 0x835d4f0), peers [0,2] op hist [])
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'config diff' '{prefix=config diff}'
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'config show' '{prefix=config show}'
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:55.177781+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 30957568 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:56.177916+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: prioritycache tune_memory target: 4294967296 mapped: 215777280 unmapped: 31064064 heap: 246841344 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: tick
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_tickets
Feb 02 16:02:27 compute-0 ceph-osd[87170]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:57.178103+0000)
Feb 02 16:02:27 compute-0 ceph-osd[87170]: do_command 'log dump' '{prefix=log dump}'
Feb 02 16:02:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 02 16:02:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859718089' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 02 16:02:27 compute-0 nova_compute[239545]: 2026-02-02 16:02:27.539 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 02 16:02:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110869457' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 02 16:02:27 compute-0 rsyslogd[1004]: imjournal from <np0005605268:ceph-osd>: begin to drop messages due to rate-limiting
Feb 02 16:02:27 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 02 16:02:27 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 02 16:02:27 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518074427' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2801969465' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: pgmap v2151: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.19202 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.19204 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/112889341' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1859718089' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2110869457' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2518074427' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2801969465' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/238100838' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/238100838' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411834022' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754983985' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 02 16:02:28 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 02 16:02:28 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2949146196' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/238100838' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.10:0/238100838' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2411834022' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3754983985' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2949146196' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 02 16:02:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055486042' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 02 16:02:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189021481' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 02 16:02:29 compute-0 nova_compute[239545]: 2026-02-02 16:02:29.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 02 16:02:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670643046' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 02 16:02:29 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 02 16:02:29 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627579075' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 02 16:02:30 compute-0 ceph-mon[75334]: pgmap v2152: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4055486042' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 02 16:02:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4189021481' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 02 16:02:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1670643046' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 02 16:02:30 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3627579075' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 02 16:02:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 02 16:02:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655868918' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 02 16:02:30 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 02 16:02:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/645614045' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 16:02:30 compute-0 nova_compute[239545]: 2026-02-02 16:02:30.406 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:30 compute-0 nova_compute[239545]: 2026-02-02 16:02:30.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 02 16:02:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091970797' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 02 16:02:30 compute-0 nova_compute[239545]: 2026-02-02 16:02:30.641 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:30 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 02 16:02:30 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154821609' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 02 16:02:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3655868918' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 02 16:02:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/645614045' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 02 16:02:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/4091970797' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 02 16:02:31 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3154821609' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 02 16:02:31 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19242 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:31 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19244 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:31 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19246 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.545 239549 DEBUG nova.compute.manager [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.545 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.583 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.584 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.584 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.584 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 02 16:02:31 compute-0 nova_compute[239545]: 2026-02-02 16:02:31.584 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:02:31 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19248 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 252 ms_handle_reset con 0x557ad78a8000 session 0x557ad5a84380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:00.027944+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 252 ms_handle_reset con 0x557ad829d800 session 0x557ad790a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 252 heartbeat osd_stat(store_statfs(0x4c5a23000/0x0/0x4ffc00000, data 0x362f7251/0x36467000, compress 0x0/0x0/0x0, omap 0x427fd, meta 0x3d2d803), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 ms_handle_reset con 0x557ad8b17400 session 0x557ad6bf0a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 49356800 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 ms_handle_reset con 0x557ad538d000 session 0x557ad7b37340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:01.028086+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 ms_handle_reset con 0x557ad78a8000 session 0x557ad6384380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 151863296 unmapped: 49233920 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 heartbeat osd_stat(store_statfs(0x4c4b81000/0x0/0x4ffc00000, data 0x37194e5f/0x37308000, compress 0x0/0x0/0x0, omap 0x42bf3, meta 0x3d2d40d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 253 handle_osd_map epochs [254,254], i have 254, src has [1,254]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:02.028416+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.728559494s of 10.012578011s, submitted: 64
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 254 ms_handle_reset con 0x557ad829d800 session 0x557ad791f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 254 ms_handle_reset con 0x557adbe94000 session 0x557ad7b37dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 49168384 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6718830 data_alloc: 234881024 data_used: 25391204
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:03.028611+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 255 ms_handle_reset con 0x557ad8f3d400 session 0x557ad7475c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 255 ms_handle_reset con 0x557ad538d000 session 0x557ad78ecfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 255 ms_handle_reset con 0x557ad78a8000 session 0x557ad518ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152002560 unmapped: 49094656 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:04.028872+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 heartbeat osd_stat(store_statfs(0x4c4b52000/0x0/0x4ffc00000, data 0x371c2610/0x37338000, compress 0x0/0x0/0x0, omap 0x4328e, meta 0x3d2cd72), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 155017216 unmapped: 46080000 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:05.029001+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 ms_handle_reset con 0x557ad8f3d000 session 0x557ad5e4fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 ms_handle_reset con 0x557ad78a9400 session 0x557ad524cfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 ms_handle_reset con 0x557ad538c000 session 0x557ad5351500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168730624 unmapped: 32366592 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 ms_handle_reset con 0x557ad538d000 session 0x557ad5a848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 heartbeat osd_stat(store_statfs(0x4c4b4d000/0x0/0x4ffc00000, data 0x371c4210/0x3733b000, compress 0x0/0x0/0x0, omap 0x435ff, meta 0x3d2ca01), peers [1,2] op hist [0,0,2])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:06.029110+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166510592 unmapped: 34586624 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:07.029222+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 34283520 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6696848 data_alloc: 251658240 data_used: 29774180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:08.029346+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 34283520 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 heartbeat osd_stat(store_statfs(0x4c56ae000/0x0/0x4ffc00000, data 0x366671dd/0x367dc000, compress 0x0/0x0/0x0, omap 0x43814, meta 0x3d2c7ec), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:09.029575+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 34283520 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:10.029723+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166903808 unmapped: 34193408 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:11.029888+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 ms_handle_reset con 0x557ad78a8000 session 0x557ad7b5aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 35684352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:12.030049+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.353460312s of 10.266341209s, submitted: 190
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 35684352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6696987 data_alloc: 251658240 data_used: 29782435
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:13.030202+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 35684352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557ad78a9400 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:14.030335+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 heartbeat osd_stat(store_statfs(0x4c56aa000/0x0/0x4ffc00000, data 0x36668cde/0x367e0000, compress 0x0/0x0/0x0, omap 0x43d07, meta 0x3d2c2f9), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 heartbeat osd_stat(store_statfs(0x4c56a5000/0x0/0x4ffc00000, data 0x3666a87a/0x367e3000, compress 0x0/0x0/0x0, omap 0x44169, meta 0x3d2be97), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 35684352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:15.030473+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 35684352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:16.030602+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 heartbeat osd_stat(store_statfs(0x4c56a5000/0x0/0x4ffc00000, data 0x3666a87a/0x367e3000, compress 0x0/0x0/0x0, omap 0x44169, meta 0x3d2be97), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167067648 unmapped: 34029568 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:17.030794+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557adbe94c00 session 0x557ad5350a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557adbe94800 session 0x557ad5346000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557adbe94800 session 0x557ad5a85c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 34086912 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6815571 data_alloc: 251658240 data_used: 29942179
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:18.030986+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 166985728 unmapped: 34111488 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557ad78a8000 session 0x557ad53476c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:19.031139+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557ad78a9400 session 0x557ad7534a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 ms_handle_reset con 0x557adbe94c00 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 258 handle_osd_map epochs [258,259], i have 259, src has [1,259]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167247872 unmapped: 33849344 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe95000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 259 ms_handle_reset con 0x557adbe95000 session 0x557ad7b27dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 259 heartbeat osd_stat(store_statfs(0x4c3a86000/0x0/0x4ffc00000, data 0x3750a91c/0x37261000, compress 0x0/0x0/0x0, omap 0x44c23, meta 0x4ecb3dd), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:20.031415+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 259 ms_handle_reset con 0x557ad78a8000 session 0x557ad7535500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 ms_handle_reset con 0x557ad78a9400 session 0x557ad791fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 ms_handle_reset con 0x557adbe94800 session 0x557ad6bf01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 33628160 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 ms_handle_reset con 0x557ad538d000 session 0x557ad7b37dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:21.031642+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167518208 unmapped: 33579008 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 ms_handle_reset con 0x557adbe94c00 session 0x557ad75348c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:22.031875+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.902719975s of 10.019507408s, submitted: 164
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167534592 unmapped: 33562624 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6829815 data_alloc: 251658240 data_used: 30138787
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:23.032079+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 260 ms_handle_reset con 0x557ad538d000 session 0x557ad7b5a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 261 ms_handle_reset con 0x557ad78a9400 session 0x557ad791e1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167567360 unmapped: 33529856 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:24.032205+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 ms_handle_reset con 0x557adbe94800 session 0x557ad790a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 ms_handle_reset con 0x557ad78a8000 session 0x557ad5351a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe95400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 ms_handle_reset con 0x557adbe95400 session 0x557ad5351500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167632896 unmapped: 33464320 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:25.032376+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 ms_handle_reset con 0x557ad538d000 session 0x557ad5e9c8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 heartbeat osd_stat(store_statfs(0x4c3a7c000/0x0/0x4ffc00000, data 0x3751198f/0x3726e000, compress 0x0/0x0/0x0, omap 0x45f6c, meta 0x4eca094), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 262 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 263 heartbeat osd_stat(store_statfs(0x4c3a76000/0x0/0x4ffc00000, data 0x37513539/0x37270000, compress 0x0/0x0/0x0, omap 0x46372, meta 0x4ec9c8e), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168706048 unmapped: 32391168 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:26.032508+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168747008 unmapped: 32350208 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 264 ms_handle_reset con 0x557ad78a8000 session 0x557ad5347dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:27.032768+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe95800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 264 ms_handle_reset con 0x557adbe94800 session 0x557ad518e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 264 ms_handle_reset con 0x557ad78a9400 session 0x557ad73f01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 32169984 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6846690 data_alloc: 251658240 data_used: 30139274
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:28.033017+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 264 handle_osd_map epochs [264,265], i have 265, src has [1,265]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe95c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 265 ms_handle_reset con 0x557adbe95c00 session 0x557ad5347500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe95c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 265 ms_handle_reset con 0x557adbe95c00 session 0x557ad791f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 32161792 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 265 ms_handle_reset con 0x557ad538d000 session 0x557ad518ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:29.033246+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 266 ms_handle_reset con 0x557ad78a8000 session 0x557ad6bf0fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 266 ms_handle_reset con 0x557adbe95800 session 0x557ad78eca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168378368 unmapped: 32718848 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:30.033433+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 266 ms_handle_reset con 0x557ad78a8c00 session 0x557ad7534540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 267 ms_handle_reset con 0x557ad538c000 session 0x557ad790b500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 267 ms_handle_reset con 0x557ad52c6800 session 0x557ad4eb9500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 267 ms_handle_reset con 0x557ad538c800 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168484864 unmapped: 32612352 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 267 heartbeat osd_stat(store_statfs(0x4c3a71000/0x0/0x4ffc00000, data 0x37518e06/0x3727b000, compress 0x0/0x0/0x0, omap 0x47590, meta 0x4ec8a70), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:31.033553+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538cc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 268 ms_handle_reset con 0x557ad538cc00 session 0x557ad4eb9c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 268 ms_handle_reset con 0x557ad538d400 session 0x557ad518f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 268 ms_handle_reset con 0x557ad538c400 session 0x557ad7b36a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 32645120 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:32.033754+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.478686333s of 10.011374474s, submitted: 173
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad52c6800 session 0x557ad7535dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad8f3d000 session 0x557ad518f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168493056 unmapped: 32604160 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6858551 data_alloc: 251658240 data_used: 30139258
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad538c000 session 0x557ad791fa40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:33.033935+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168509440 unmapped: 32587776 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad8f3d800 session 0x557ad7534000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad8f3dc00 session 0x557ad789a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:34.034092+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad52c6800 session 0x557ad5a85340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 46399488 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:35.034522+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 heartbeat osd_stat(store_statfs(0x4c4f4d000/0x0/0x4ffc00000, data 0x35c7acd4/0x359db000, compress 0x0/0x0/0x0, omap 0x4841d, meta 0x4ec7be3), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 46399488 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:36.034768+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 46399488 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:37.034920+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 46399488 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6631032 data_alloc: 234881024 data_used: 15350650
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:38.035125+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad829d800 session 0x557ad791ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557adbe94000 session 0x557ad7b5bc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557adbe94400 session 0x557ad790b340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 46383104 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:39.035262+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 ms_handle_reset con 0x557ad52c6800 session 0x557ad791e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 60170240 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:40.035381+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 60923904 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 270 heartbeat osd_stat(store_statfs(0x4c61d3000/0x0/0x4ffc00000, data 0x34db9c3f/0x34b17000, compress 0x0/0x0/0x0, omap 0x485ad, meta 0x4ec7a53), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:41.035498+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140173312 unmapped: 60923904 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:42.035660+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.722065926s of 10.000762939s, submitted: 171
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 272 ms_handle_reset con 0x557ad829d800 session 0x557ad5e9ce00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 272 heartbeat osd_stat(store_statfs(0x4c61cd000/0x0/0x4ffc00000, data 0x34dbd3c3/0x34b1b000, compress 0x0/0x0/0x0, omap 0x48e38, meta 0x4ec71c8), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 272 ms_handle_reset con 0x557ad86d8000 session 0x557ad5e4fa40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 61718528 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6452430 data_alloc: 218103808 data_used: 314464
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:43.035886+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71ac400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 144523264 unmapped: 56573952 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 273 ms_handle_reset con 0x557ad7191000 session 0x557ad791e540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:44.036159+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 273 ms_handle_reset con 0x557ad71ac400 session 0x557ad5350380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 274 ms_handle_reset con 0x557ad92c1800 session 0x557ad524cc40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 141156352 unmapped: 59940864 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:45.036335+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 141230080 unmapped: 59867136 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:46.036456+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 274 ms_handle_reset con 0x557ad52c6800 session 0x557ad791ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 141230080 unmapped: 59867136 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:47.036580+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 60293120 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 275 heartbeat osd_stat(store_statfs(0x4c5a50000/0x0/0x4ffc00000, data 0x35e52845/0x35296000, compress 0x0/0x0/0x0, omap 0x4a087, meta 0x4ec5f79), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:48.036827+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6597970 data_alloc: 218103808 data_used: 315077
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 60276736 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:49.036974+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 276 ms_handle_reset con 0x557ad7191000 session 0x557ad4eb96c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 60694528 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:50.037150+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 276 ms_handle_reset con 0x557ad829d800 session 0x557ad5e9da40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 60694528 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:51.037339+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 60686336 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:52.037470+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 276 heartbeat osd_stat(store_statfs(0x4c6ee5000/0x0/0x4ffc00000, data 0x340a008d/0x33e05000, compress 0x0/0x0/0x0, omap 0x4ba4b, meta 0x4ec45b5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 276 ms_handle_reset con 0x557ad738c400 session 0x557ad5e4f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.612604141s of 10.001750946s, submitted: 206
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad86d8000 session 0x557ad6bf1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad52c6800 session 0x557ad789b500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140435456 unmapped: 60661760 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:53.037613+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6412709 data_alloc: 218103808 data_used: 316247
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad738c400 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad7191000 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 60653568 heap: 201097216 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad829d800 session 0x557ad518f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:54.037808+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 63881216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:55.037943+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 5382 syncs, 3.03 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 23.73 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4383 syncs, 2.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 145924096 unmapped: 109748224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:56.038062+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 heartbeat osd_stat(store_statfs(0x4be2e1000/0x0/0x4ffc00000, data 0x3cca1dfb/0x3ca0b000, compress 0x0/0x0/0x0, omap 0x4c371, meta 0x4ec3c8f), peers [1,2] op hist [0,0,1,1,0,0,2])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad738c400 session 0x557ad53516c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147570688 unmapped: 108101632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:57.038162+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 ms_handle_reset con 0x557ad86d8000 session 0x557ad7938540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 278 heartbeat osd_stat(store_statfs(0x4b9660000/0x0/0x4ffc00000, data 0x41923d99/0x4168c000, compress 0x0/0x0/0x0, omap 0x4c4a9, meta 0x4ec3b57), peers [1,2] op hist [0,0,0,0,0,2,1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 149315584 unmapped: 106356736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:58.038312+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8052331 data_alloc: 218103808 data_used: 316860
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 149864448 unmapped: 105807872 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:36:59.038465+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 278 heartbeat osd_stat(store_statfs(0x4af793000/0x0/0x4ffc00000, data 0x4b7ef818/0x4b559000, compress 0x0/0x0/0x0, omap 0x4c913, meta 0x4ec36ed), peers [1,2] op hist [0,0,0,1,0,1,2,1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 151674880 unmapped: 103997440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:00.038587+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad7191000 session 0x557ad5a85c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad52c6800 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc ms_handle_reset ms_handle_reset con 0x557ad5436000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/273425939
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/273425939,v1:192.168.122.100:6801/273425939]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: get_auth_request con 0x557ad829d800 auth_method 0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc handle_mgr_configure stats_period=5
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad52c6800 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad7191000 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147857408 unmapped: 107814912 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:01.038757+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad738c400 session 0x557ad5a84a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad86d8000 session 0x557ad7938540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 heartbeat osd_stat(store_statfs(0x4c6391000/0x0/0x4ffc00000, data 0x34bf12f0/0x3495a000, compress 0x0/0x0/0x0, omap 0x4d05b, meta 0x4ec2fa5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad92c1800 session 0x557ad7535a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 108699648 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:02.038939+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad52c6800 session 0x557ad789a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.358632088s of 10.133784294s, submitted: 610
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 ms_handle_reset con 0x557ad738c400 session 0x557ad7911c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad7191000 session 0x557ad5e4f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 108732416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:03.039112+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad86d8000 session 0x557ad791ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad5edc800 session 0x557ad5e9da40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6621540 data_alloc: 218103808 data_used: 316860
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad52c6800 session 0x557ad789b500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad7191000 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad738c400 session 0x557ad518fa40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad86d8000 session 0x557ad75348c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71adc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad71adc00 session 0x557ad6bf1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad750dc00 session 0x557ad7b36a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147152896 unmapped: 108519424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:04.039246+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad7191000 session 0x557ad5e9c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147161088 unmapped: 108511232 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:05.039364+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 ms_handle_reset con 0x557ad5eda400 session 0x557ad78f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 heartbeat osd_stat(store_statfs(0x4c5c86000/0x0/0x4ffc00000, data 0x352f4fc4/0x35063000, compress 0x0/0x0/0x0, omap 0x4e005, meta 0x4ec1ffb), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 281 ms_handle_reset con 0x557ad52c6800 session 0x557ad71a9c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147169280 unmapped: 108503040 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:06.039465+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad738c400 session 0x557ad5346000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147193856 unmapped: 108478464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:07.039583+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad87e3800 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ada0cd400 session 0x557ad5347dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad738d000 session 0x557ad5351a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad6394c00 session 0x557ad7474fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 heartbeat osd_stat(store_statfs(0x4c5c96000/0x0/0x4ffc00000, data 0x34ebd7d4/0x35052000, compress 0x0/0x0/0x0, omap 0x4ece2, meta 0x4ec131e), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad52c6800 session 0x557ad5351880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 147210240 unmapped: 108462080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:08.039788+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6560209 data_alloc: 218103808 data_used: 92181
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad7191000 session 0x557ad6bf1500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad52c6800 session 0x557ad5328c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 ms_handle_reset con 0x557ad6394c00 session 0x557ad7b5bc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 282 handle_osd_map epochs [282,283], i have 283, src has [1,283]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 283 ms_handle_reset con 0x557ad738d000 session 0x557ad5e9ca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:09.039935+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 146153472 unmapped: 109518848 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 ms_handle_reset con 0x557ad7191000 session 0x557ad5a85c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 ms_handle_reset con 0x557ad8f3c800 session 0x557ad73f16c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:10.040054+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 106381312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 ms_handle_reset con 0x557ad8f3c800 session 0x557ad63848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 ms_handle_reset con 0x557ad750dc00 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:11.040190+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 105177088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 heartbeat osd_stat(store_statfs(0x4d4b08000/0x0/0x4ffc00000, data 0x24c4af95/0x24de0000, compress 0x0/0x0/0x0, omap 0x4f823, meta 0x4ec07dd), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:12.040332+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150495232 unmapped: 105177088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.615063667s of 10.022487640s, submitted: 384
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:13.040461+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150528000 unmapped: 105144320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 285 ms_handle_reset con 0x557ad52c6800 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2493240 data_alloc: 218103808 data_used: 7257175
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:14.040868+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 105136128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:15.040993+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 105136128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:16.041203+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 105136128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:17.041392+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 105136128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 285 heartbeat osd_stat(store_statfs(0x4f9707000/0x0/0x4ffc00000, data 0x144cbed/0x15e3000, compress 0x0/0x0/0x0, omap 0x4ff67, meta 0x4ec0099), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:18.041513+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 105127936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2496142 data_alloc: 218103808 data_used: 7257788
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:19.041671+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 105127936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:20.041787+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 101597184 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 286 heartbeat osd_stat(store_statfs(0x4f863a000/0x0/0x4ffc00000, data 0x251268c/0x26aa000, compress 0x0/0x0/0x0, omap 0x501c8, meta 0x4ebfe38), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:21.041909+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 96575488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 287 ms_handle_reset con 0x557ad6394c00 session 0x557ad5347180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:22.042062+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 97460224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:23.042210+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 97460224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613670 data_alloc: 218103808 data_used: 8688316
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.344770432s of 10.836241722s, submitted: 299
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 287 heartbeat osd_stat(store_statfs(0x4f73f9000/0x0/0x4ffc00000, data 0x25b728a/0x2751000, compress 0x0/0x0/0x0, omap 0x505b8, meta 0x605fa48), peers [1,2] op hist [1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 287 ms_handle_reset con 0x557ad738d000 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 287 ms_handle_reset con 0x557ad7191000 session 0x557ad5a848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:24.042398+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158220288 unmapped: 97452032 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:25.042621+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158220288 unmapped: 97452032 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 ms_handle_reset con 0x557ad738d000 session 0x557ad75341c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:26.042805+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158228480 unmapped: 97443840 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 ms_handle_reset con 0x557ad6394c00 session 0x557ad7474fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 ms_handle_reset con 0x557ad52c6800 session 0x557ad791ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 heartbeat osd_stat(store_statfs(0x4f73f5000/0x0/0x4ffc00000, data 0x25b8e88/0x2755000, compress 0x0/0x0/0x0, omap 0x50b74, meta 0x605f48c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:27.042987+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 97402880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:28.043111+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 97402880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2624014 data_alloc: 218103808 data_used: 8688414
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 288 handle_osd_map epochs [288,289], i have 289, src has [1,289]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:29.043211+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad750dc00 session 0x557ad4eb9500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 97402880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad52c6800 session 0x557ad6384380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:30.043402+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 97402880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad6394c00 session 0x557ad78eca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f73eb000/0x0/0x4ffc00000, data 0x25bdb5a/0x275f000, compress 0x0/0x0/0x0, omap 0x51051, meta 0x605efaf), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:31.043594+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158269440 unmapped: 97402880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad7191000 session 0x557ad7534fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad738d000 session 0x557ad5e4f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:32.043797+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158294016 unmapped: 97378304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad5edc400 session 0x557ad78f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad5edc400 session 0x557ad5a85340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f73eb000/0x0/0x4ffc00000, data 0x25bdbcc/0x2761000, compress 0x0/0x0/0x0, omap 0x51189, meta 0x605ee77), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:33.043984+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158294016 unmapped: 97378304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2633364 data_alloc: 218103808 data_used: 8688708
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 ms_handle_reset con 0x557ad6394c00 session 0x557ad5a84a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.375137329s of 10.460056305s, submitted: 41
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:34.044121+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158294016 unmapped: 97378304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad7191000 session 0x557ad5e9c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad52c6800 session 0x557ad7534c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:35.044250+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 97361920 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad8f3c800 session 0x557ad789b500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:36.044374+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad8f3c800 session 0x557ad5e9ca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158318592 unmapped: 97353728 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad5edc400 session 0x557ad7b5ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad738c400 session 0x557ad5e4fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ada0cd400 session 0x557ad518f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 ms_handle_reset con 0x557ad7191000 session 0x557ad7b36000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad5edc400 session 0x557ad6bf0c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:37.044692+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad6394c00 session 0x557ad73f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 158302208 unmapped: 97370112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad52c6800 session 0x557ad7938540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 heartbeat osd_stat(store_statfs(0x4f83a5000/0x0/0x4ffc00000, data 0xf7f34a/0x1124000, compress 0x0/0x0/0x0, omap 0x51fca, meta 0x605e036), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad7191000 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:38.044854+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 102825984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2456677 data_alloc: 218103808 data_used: 93657
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad738c400 session 0x557ad79381c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad52c6800 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:39.045217+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 102809600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 ms_handle_reset con 0x557ad5edc400 session 0x557ad78ec700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:40.045319+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 102809600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:41.045513+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 102809600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 292 ms_handle_reset con 0x557ad6394c00 session 0x557ad78f0000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:42.045684+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 102809600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 292 ms_handle_reset con 0x557ad7191000 session 0x557ad524ddc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 ms_handle_reset con 0x557ad8f3c800 session 0x557ad4eb8e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 ms_handle_reset con 0x557ad52c6800 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:43.045766+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 153239552 unmapped: 102432768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2470357 data_alloc: 218103808 data_used: 102345
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 ms_handle_reset con 0x557ad6394c00 session 0x557ad78ec380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 ms_handle_reset con 0x557ad5edc400 session 0x557ad7474fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 heartbeat osd_stat(store_statfs(0x4f8a4e000/0x0/0x4ffc00000, data 0xf56e8d/0x10fc000, compress 0x0/0x0/0x0, omap 0x522c8, meta 0x605dd38), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 ms_handle_reset con 0x557ad5f35800 session 0x557ad6384700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:44.045862+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152911872 unmapped: 102760448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.884735107s of 10.452068329s, submitted: 189
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0xf8290c/0x1129000, compress 0x0/0x0/0x0, omap 0x52e57, meta 0x605d1a9), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad7508000 session 0x557ad791f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:45.045987+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 102752256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ada0cd800 session 0x557ad7475c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:46.046146+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 102727680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ada0cd800 session 0x557ad78f08c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:47.046273+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad52c6800 session 0x557ad78f0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 102727680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:48.046410+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 102727680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad5edc400 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2488627 data_alloc: 218103808 data_used: 2282811
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad6394c00 session 0x557ad5e9c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:49.046541+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 102727680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 heartbeat osd_stat(store_statfs(0x4f8a1f000/0x0/0x4ffc00000, data 0xf844b8/0x112d000, compress 0x0/0x0/0x0, omap 0x538ee, meta 0x605c712), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad52c6800 session 0x557ad7534fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 ms_handle_reset con 0x557ad5edc400 session 0x557ad790ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:50.046790+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 102727680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 295 ms_handle_reset con 0x557ad6394c00 session 0x557ad791e700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 295 ms_handle_reset con 0x557ad52c7000 session 0x557ad73f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:51.046961+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152952832 unmapped: 102719488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 295 ms_handle_reset con 0x557ada0cd800 session 0x557ad5a85180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:52.047239+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 295 ms_handle_reset con 0x557ad52c7000 session 0x557ad791f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 152993792 unmapped: 102678528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 ms_handle_reset con 0x557ad5edc400 session 0x557ad5351dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 ms_handle_reset con 0x557ad52c6800 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:53.047371+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 153059328 unmapped: 102612992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 ms_handle_reset con 0x557ad6394c00 session 0x557ad5346c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2492937 data_alloc: 218103808 data_used: 2283298
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 ms_handle_reset con 0x557ad7508000 session 0x557ad7b5b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 heartbeat osd_stat(store_statfs(0x4f8a1a000/0x0/0x4ffc00000, data 0xf87c16/0x1130000, compress 0x0/0x0/0x0, omap 0x54621, meta 0x605b9df), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:54.047549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 153092096 unmapped: 102580224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.633826256s of 10.164919853s, submitted: 245
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 ms_handle_reset con 0x557ad52c6800 session 0x557ad5e9d6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:55.047651+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 97689600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:56.047782+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157122560 unmapped: 98549760 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 297 ms_handle_reset con 0x557ad52c7000 session 0x557ad5350e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:57.047875+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 98541568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x1784840/0x1930000, compress 0x0/0x0/0x0, omap 0x54a3c, meta 0x605b5c4), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad6394c00 session 0x557ad789afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:58.047996+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad5edc400 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 98541568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2559196 data_alloc: 218103808 data_used: 2935684
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x1784840/0x1930000, compress 0x0/0x0/0x0, omap 0x54a3c, meta 0x605b5c4), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad7193800 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad52c6800 session 0x557ad5351880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:37:59.048180+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 98541568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:00.048310+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad52c7000 session 0x557ad790b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 98541568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 ms_handle_reset con 0x557ad6394c00 session 0x557ad789a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 299 ms_handle_reset con 0x557ad7193800 session 0x557ad5347c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 299 ms_handle_reset con 0x557ad5edc400 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:01.048448+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 98533376 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:02.048649+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 98533376 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:03.048810+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 98533376 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2563652 data_alloc: 218103808 data_used: 2939666
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad52c6800 session 0x557ad78ece00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:04.048998+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157155328 unmapped: 98516992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 heartbeat osd_stat(store_statfs(0x4f8210000/0x0/0x4ffc00000, data 0x1789ac9/0x193a000, compress 0x0/0x0/0x0, omap 0x55227, meta 0x605add9), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5bc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad6394c00 session 0x557ad6bf0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:05.049210+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157155328 unmapped: 98516992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad7193800 session 0x557ad5346000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8912c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.961863518s of 11.321086884s, submitted: 135
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad8912c00 session 0x557ad6c12c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:06.049400+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157155328 unmapped: 98516992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:07.049561+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157155328 unmapped: 98516992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 ms_handle_reset con 0x557ad52c6800 session 0x557ad790a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f820f000/0x0/0x4ffc00000, data 0x178b4d6/0x193b000, compress 0x0/0x0/0x0, omap 0x55954, meta 0x605a6ac), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:08.049648+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568091 data_alloc: 218103808 data_used: 2939666
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:09.049791+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:10.049913+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:11.050066+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f820f000/0x0/0x4ffc00000, data 0x178b4d6/0x193b000, compress 0x0/0x0/0x0, omap 0x55954, meta 0x605a6ac), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:12.050254+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f820f000/0x0/0x4ffc00000, data 0x178b4d6/0x193b000, compress 0x0/0x0/0x0, omap 0x55954, meta 0x605a6ac), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:13.050478+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568091 data_alloc: 218103808 data_used: 2939666
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:14.050600+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:15.050857+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 157179904 unmapped: 98492416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:16.050998+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156065792 unmapped: 99606528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 heartbeat osd_stat(store_statfs(0x4f820b000/0x0/0x4ffc00000, data 0x178d0d4/0x193f000, compress 0x0/0x0/0x0, omap 0x55a79, meta 0x605a587), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad6394c00 session 0x557ad78ec000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:17.051103+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.408477783s of 11.456601143s, submitted: 35
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156065792 unmapped: 99606528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad7193800 session 0x557ad5a84700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad4ddd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad4ddd800 session 0x557ad75341c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:18.051222+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad52c6800 session 0x557ad789afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad52c7000 session 0x557ad73f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad6394c00 session 0x557ad71a8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156418048 unmapped: 99254272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2631886 data_alloc: 218103808 data_used: 2939666
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:19.051367+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156418048 unmapped: 99254272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 ms_handle_reset con 0x557ad7193800 session 0x557ad5e9d880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:20.051520+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156418048 unmapped: 99254272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 heartbeat osd_stat(store_statfs(0x4f794d000/0x0/0x4ffc00000, data 0x204c0e2/0x21ff000, compress 0x0/0x0/0x0, omap 0x55b6d, meta 0x605a493), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:21.051677+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 heartbeat osd_stat(store_statfs(0x4f794d000/0x0/0x4ffc00000, data 0x204c0e2/0x21ff000, compress 0x0/0x0/0x0, omap 0x55b6d, meta 0x605a493), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156680192 unmapped: 98992128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad7190400 session 0x557ad6384700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:22.051942+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156680192 unmapped: 98992128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad7190400 session 0x557ad7b37c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad52c6800 session 0x557ad63841c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad52c7000 session 0x557ad5e9c380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:23.052094+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156680192 unmapped: 98992128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2636339 data_alloc: 218103808 data_used: 2939682
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad6394c00 session 0x557ad518fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad7193800 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:24.052229+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad52c6800 session 0x557ad78ec380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156688384 unmapped: 98983936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad52c7000 session 0x557ad790b500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad52d5000 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 ms_handle_reset con 0x557ad7193800 session 0x557ad4eb28c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:25.052344+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156704768 unmapped: 98967552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad738d000 session 0x557ad524cfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad7191000 session 0x557ad6c12e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:26.052509+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb36c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156966912 unmapped: 98705408 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad52c6800 session 0x557ad791f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad52d5000 session 0x557ad7b36000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:27.052668+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 heartbeat osd_stat(store_statfs(0x4f836a000/0x0/0x4ffc00000, data 0x162a87f/0x17e1000, compress 0x0/0x0/0x0, omap 0x560b7, meta 0x6059f49), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156966912 unmapped: 98705408 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 ms_handle_reset con 0x557ad7193800 session 0x557ad4eb8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.762213707s of 10.441011429s, submitted: 78
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 ms_handle_reset con 0x557ad52c6800 session 0x557ad7b36a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:28.052771+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156975104 unmapped: 98697216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2614134 data_alloc: 218103808 data_used: 9101090
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:29.053002+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156975104 unmapped: 98697216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 ms_handle_reset con 0x557ad52c7000 session 0x557ad5e4f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:30.053230+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156975104 unmapped: 98697216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f8394000/0x0/0x4ffc00000, data 0x16023ff/0x17b8000, compress 0x0/0x0/0x0, omap 0x5674e, meta 0x60598b2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:31.053413+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156975104 unmapped: 98697216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 ms_handle_reset con 0x557ad52d5000 session 0x557ad5e9c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:32.053611+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 ms_handle_reset con 0x557ad7191000 session 0x557ad75348c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156983296 unmapped: 98689024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:33.053871+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156983296 unmapped: 98689024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2611990 data_alloc: 218103808 data_used: 9105072
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:34.054008+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156991488 unmapped: 98680832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c1400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 306 ms_handle_reset con 0x557ad92c1400 session 0x557ad5e9c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:35.054138+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 156999680 unmapped: 98672640 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x1603e3e/0x17bb000, compress 0x0/0x0/0x0, omap 0x569fb, meta 0x6059605), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:36.054285+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 163078144 unmapped: 92594176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7a04000/0x0/0x4ffc00000, data 0x1f88e3e/0x2140000, compress 0x0/0x0/0x0, omap 0x569fb, meta 0x6059605), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:37.054462+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 306 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 92766208 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.421188354s of 10.132289886s, submitted: 160
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 307 ms_handle_reset con 0x557ad52d5000 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:38.054633+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 91545600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2692176 data_alloc: 234881024 data_used: 10449970
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 308 ms_handle_reset con 0x557ad7191000 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c0c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 308 ms_handle_reset con 0x557ad92c0c00 session 0x557ad78ec540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 308 ms_handle_reset con 0x557ad52c6800 session 0x557ad5a85180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:39.054795+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164184064 unmapped: 91488256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:40.054948+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164184064 unmapped: 91488256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f795d000/0x0/0x4ffc00000, data 0x202dadd/0x21ea000, compress 0x0/0x0/0x0, omap 0x56f4d, meta 0x60590b3), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 309 ms_handle_reset con 0x557ad52c6800 session 0x557ad791ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 309 ms_handle_reset con 0x557ad52c7000 session 0x557ad7534a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:41.055083+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164249600 unmapped: 91422720 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:42.055329+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164249600 unmapped: 91422720 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 310 ms_handle_reset con 0x557ad52d5000 session 0x557ad78ec8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 310 ms_handle_reset con 0x557ad7191000 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:43.055491+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165306368 unmapped: 90365952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2696184 data_alloc: 234881024 data_used: 10449856
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:44.055605+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165306368 unmapped: 90365952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:45.055760+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165306368 unmapped: 90365952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:46.055884+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f795a000/0x0/0x4ffc00000, data 0x2033d00/0x21ee000, compress 0x0/0x0/0x0, omap 0x57c46, meta 0x60583ba), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165306368 unmapped: 90365952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:47.056048+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 165306368 unmapped: 90365952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 310 handle_osd_map epochs [310,311], i have 311, src has [1,311]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:48.056226+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 91275264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2698062 data_alloc: 234881024 data_used: 10449856
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:49.056377+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 91275264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c0c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 311 ms_handle_reset con 0x557ad92c0c00 session 0x557ad789a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:50.056567+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 91275264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 311 heartbeat osd_stat(store_statfs(0x4f7959000/0x0/0x4ffc00000, data 0x20357b7/0x21f1000, compress 0x0/0x0/0x0, omap 0x57eab, meta 0x6058155), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:51.056730+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 91275264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.512664795s of 13.661522865s, submitted: 108
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 311 ms_handle_reset con 0x557ad52c6800 session 0x557ad4eb2380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:52.056862+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 311 ms_handle_reset con 0x557ad52c7000 session 0x557ad63841c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164397056 unmapped: 91275264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 312 ms_handle_reset con 0x557ad52d5000 session 0x557ad5351dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:53.057075+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 91258880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2704230 data_alloc: 234881024 data_used: 10449856
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 313 ms_handle_reset con 0x557ad7191000 session 0x557ad7b37dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 313 heartbeat osd_stat(store_statfs(0x4f7954000/0x0/0x4ffc00000, data 0x2039353/0x21f6000, compress 0x0/0x0/0x0, omap 0x581d3, meta 0x6057e2d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:54.057226+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 91258880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 313 ms_handle_reset con 0x557ad7513400 session 0x557ad7b5a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:55.057372+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164413440 unmapped: 91258880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 314 ms_handle_reset con 0x557ad52c6800 session 0x557ad5e4ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 ms_handle_reset con 0x557ad52d5000 session 0x557ad5a84700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5311800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 ms_handle_reset con 0x557ad87e3c00 session 0x557ad524d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 ms_handle_reset con 0x557ad7191000 session 0x557ad53508c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:56.057529+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71ad000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 ms_handle_reset con 0x557ad71ad000 session 0x557ad5e4f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 91103232 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 315 handle_osd_map epochs [316,316], i have 316, src has [1,316]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad52c6800 session 0x557ad7534540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad5311800 session 0x557ad7534000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad52c7000 session 0x557ad790ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad7191000 session 0x557ad78f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad52d5000 session 0x557ad5347c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:57.057642+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad52c6800 session 0x557ad4eb9a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 ms_handle_reset con 0x557ad52d5000 session 0x557ad7b36c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 77971456 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 317 ms_handle_reset con 0x557ad52c7000 session 0x557ad78ed180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5311800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:58.057804+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 317 ms_handle_reset con 0x557ad87e3c00 session 0x557ad4eb8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 177750016 unmapped: 77922304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819820 data_alloc: 234881024 data_used: 19944384
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7507800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 317 heartbeat osd_stat(store_statfs(0x4f706b000/0x0/0x4ffc00000, data 0x2910102/0x2ad9000, compress 0x0/0x0/0x0, omap 0x59235, meta 0x6056dcb), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 ms_handle_reset con 0x557ad5311800 session 0x557ad7475500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 ms_handle_reset con 0x557ad7507800 session 0x557ad7b36c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 ms_handle_reset con 0x557ad7191000 session 0x557ad78f0a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:38:59.058015+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 82059264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:00.058191+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c6800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 ms_handle_reset con 0x557ad52d5000 session 0x557ad5a84700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 82059264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cdc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 319 ms_handle_reset con 0x557ad87e3c00 session 0x557ad790afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:01.058335+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 319 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb2000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 173752320 unmapped: 81920000 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.494272232s of 10.047286987s, submitted: 168
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 319 ms_handle_reset con 0x557ad52d5000 session 0x557ad7534fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 319 ms_handle_reset con 0x557ad7191000 session 0x557ad5351500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 320 ms_handle_reset con 0x557ada0cdc00 session 0x557ad63841c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 320 ms_handle_reset con 0x557ad52c6800 session 0x557ad791f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:02.058539+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 173826048 unmapped: 81846272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 321 ms_handle_reset con 0x557ad52c7000 session 0x557ad790a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:03.058688+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cdc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 321 ms_handle_reset con 0x557ad7191000 session 0x557ad73f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 321 heartbeat osd_stat(store_statfs(0x4f7065000/0x0/0x4ffc00000, data 0x2915886/0x2ae5000, compress 0x0/0x0/0x0, omap 0x5a646, meta 0x60559ba), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 173891584 unmapped: 81780736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2827291 data_alloc: 234881024 data_used: 19944678
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ada0cdc00 session 0x557ad4eb3180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 heartbeat osd_stat(store_statfs(0x4f7062000/0x0/0x4ffc00000, data 0x291724d/0x2ae5000, compress 0x0/0x0/0x0, omap 0x5ad89, meta 0x6055277), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7507800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:04.058900+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad7507800 session 0x557ad791e540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad52d5000 session 0x557ad78ec540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad52d5000 session 0x557ad789ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 heartbeat osd_stat(store_statfs(0x4f7062000/0x0/0x4ffc00000, data 0x291724d/0x2ae5000, compress 0x0/0x0/0x0, omap 0x5ad89, meta 0x6055277), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad7191000 session 0x557ad7b5a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad52c7000 session 0x557ad733c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7507800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad7507800 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 81657856 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cdc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ada0cdc00 session 0x557ad5e9d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:05.059317+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 81633280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb2a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:06.059512+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 81633280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:07.059665+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 81633280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad7191000 session 0x557ad5a84e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7507800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 ms_handle_reset con 0x557ad7507800 session 0x557ad5328000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 322 handle_osd_map epochs [322,323], i have 323, src has [1,323]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 323 ms_handle_reset con 0x557ad71acc00 session 0x557ad518f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750cc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 323 ms_handle_reset con 0x557ad750cc00 session 0x557ad5e4f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:08.059822+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 323 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 323 heartbeat osd_stat(store_statfs(0x4f705e000/0x0/0x4ffc00000, data 0x291ac02/0x2aec000, compress 0x0/0x0/0x0, omap 0x5b687, meta 0x6054979), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 81633280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833576 data_alloc: 234881024 data_used: 19946163
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 ms_handle_reset con 0x557ad52d5000 session 0x557ad789b6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:09.059955+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 ms_handle_reset con 0x557ad7191000 session 0x557ad518ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 ms_handle_reset con 0x557ad71acc00 session 0x557ad791f880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 81633280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7507800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 ms_handle_reset con 0x557ad7507800 session 0x557ad5350e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 ms_handle_reset con 0x557ad52d5000 session 0x557ad789a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 ms_handle_reset con 0x557ad52c7000 session 0x557ad518fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:10.060092+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 ms_handle_reset con 0x557ad7191000 session 0x557ad518ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 81469440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:11.060214+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 ms_handle_reset con 0x557add020800 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 77070336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:12.060345+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.253252983s of 10.661709785s, submitted: 253
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 ms_handle_reset con 0x557ad7775c00 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 76890112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 ms_handle_reset con 0x557ad52c7000 session 0x557ad791e700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:13.060487+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f7037000/0x0/0x4ffc00000, data 0x2942380/0x2b15000, compress 0x0/0x0/0x0, omap 0x5c3fe, meta 0x6053c02), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 325 handle_osd_map epochs [326,326], i have 326, src has [1,326]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179011584 unmapped: 76660736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925323 data_alloc: 234881024 data_used: 25065025
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 326 ms_handle_reset con 0x557ad7191000 session 0x557ad789b340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 327 ms_handle_reset con 0x557ad7775c00 session 0x557ad73f16c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 327 ms_handle_reset con 0x557add020800 session 0x557ad7b368c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:14.060601+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179011584 unmapped: 76660736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 328 ms_handle_reset con 0x557adb192c00 session 0x557ad5e9dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 328 ms_handle_reset con 0x557ad52c7000 session 0x557ad5e9c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 328 ms_handle_reset con 0x557ad52d5000 session 0x557ad7b36000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:15.060718+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 328 ms_handle_reset con 0x557ad7191000 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 178184192 unmapped: 77488128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:16.060864+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 178184192 unmapped: 77488128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 329 ms_handle_reset con 0x557add020800 session 0x557ad524c8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 329 ms_handle_reset con 0x557ad7775c00 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:17.061002+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 329 ms_handle_reset con 0x557ad52c7000 session 0x557ad791fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179273728 unmapped: 76398592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 329 handle_osd_map epochs [329,330], i have 330, src has [1,330]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 ms_handle_reset con 0x557ad52d5000 session 0x557ad5e9ce00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:18.061163+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179314688 unmapped: 76357632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2900450 data_alloc: 234881024 data_used: 25060929
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:19.061331+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 heartbeat osd_stat(store_statfs(0x4f701f000/0x0/0x4ffc00000, data 0x294b215/0x2b28000, compress 0x0/0x0/0x0, omap 0x5da9b, meta 0x6052565), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179322880 unmapped: 76349440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 ms_handle_reset con 0x557ad8d96400 session 0x557ad78ecfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 ms_handle_reset con 0x557add020800 session 0x557ad4eb9dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 ms_handle_reset con 0x557adcfbd400 session 0x557ad5329180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:20.061497+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179740672 unmapped: 75931648 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 ms_handle_reset con 0x557adb193400 session 0x557ad5350380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 ms_handle_reset con 0x557ad52c7000 session 0x557ad7535180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 ms_handle_reset con 0x557ad7191000 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 heartbeat osd_stat(store_statfs(0x4f701f000/0x0/0x4ffc00000, data 0x294b215/0x2b28000, compress 0x0/0x0/0x0, omap 0x5da9b, meta 0x6052565), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:21.061605+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52d5000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186425344 unmapped: 69246976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 ms_handle_reset con 0x557ad8d96400 session 0x557ad6bf01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:22.061773+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030637741s of 10.306366920s, submitted: 146
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186449920 unmapped: 69222400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 332 ms_handle_reset con 0x557ad52d5000 session 0x557ad733c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 332 ms_handle_reset con 0x557ad52c7000 session 0x557ad73f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:23.061893+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 72425472 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949555 data_alloc: 251658240 data_used: 27832745
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 332 heartbeat osd_stat(store_statfs(0x4f6c14000/0x0/0x4ffc00000, data 0x2d56a0f/0x2f36000, compress 0x0/0x0/0x0, omap 0x5e429, meta 0x6051bd7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:24.062011+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 72376320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 334 ms_handle_reset con 0x557ad7191000 session 0x557ad78ed6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 334 ms_handle_reset con 0x557ad8d96400 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:25.062257+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 72343552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 335 ms_handle_reset con 0x557adb193400 session 0x557ad5347180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:26.062574+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 335 ms_handle_reset con 0x557add020800 session 0x557ad78f0540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 335 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183566336 unmapped: 72105984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 336 ms_handle_reset con 0x557ad7191000 session 0x557ad524da40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:27.062745+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183681024 unmapped: 71991296 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:28.062878+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 336 ms_handle_reset con 0x557adb193400 session 0x557ad6bf1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f683e000/0x0/0x4ffc00000, data 0x3127389/0x330a000, compress 0x0/0x0/0x0, omap 0x5f001, meta 0x6050fff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 184041472 unmapped: 71630848 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 336 handle_osd_map epochs [336,337], i have 337, src has [1,337]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993795 data_alloc: 251658240 data_used: 28238347
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:29.063027+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add023000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 185401344 unmapped: 70270976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 337 ms_handle_reset con 0x557ad8d96400 session 0x557ad790afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add027c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 ms_handle_reset con 0x557add027c00 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:30.063144+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 187752448 unmapped: 67919872 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:31.063314+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 187752448 unmapped: 67919872 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:32.063519+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 ms_handle_reset con 0x557add023000 session 0x557ad6bf0000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 ms_handle_reset con 0x557add020800 session 0x557ad4eb9340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.545202255s of 10.002570152s, submitted: 222
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186638336 unmapped: 69033984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 ms_handle_reset con 0x557ad52c7000 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:33.063649+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186638336 unmapped: 69033984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2970051 data_alloc: 251658240 data_used: 28234153
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f6c06000/0x0/0x4ffc00000, data 0x2d60b2d/0x2f44000, compress 0x0/0x0/0x0, omap 0x5f6dd, meta 0x6050923), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:34.063783+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186654720 unmapped: 69017600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:35.063888+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 339 ms_handle_reset con 0x557ad7191000 session 0x557ad7534a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 186662912 unmapped: 69009408 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:36.064010+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189440000 unmapped: 66232320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:37.064157+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 340 ms_handle_reset con 0x557adb193400 session 0x557ad4eb2000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189448192 unmapped: 66224128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 341 heartbeat osd_stat(store_statfs(0x4f69d1000/0x0/0x4ffc00000, data 0x2f90279/0x3179000, compress 0x0/0x0/0x0, omap 0x6011a, meta 0x604fee6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 341 ms_handle_reset con 0x557ad7191000 session 0x557ad78f0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 341 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:38.064297+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 187293696 unmapped: 68378624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006922 data_alloc: 251658240 data_used: 28242341
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 342 ms_handle_reset con 0x557adb193400 session 0x557ad78ecfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 342 ms_handle_reset con 0x557add020800 session 0x557ad7534fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:39.064416+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 342 ms_handle_reset con 0x557ad8d96400 session 0x557ad733c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188375040 unmapped: 67297280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:40.064534+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 67289088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:41.064686+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 343 ms_handle_reset con 0x557ad52c7000 session 0x557ad4eb8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 343 ms_handle_reset con 0x557ad7191000 session 0x557ad791e540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188481536 unmapped: 67190784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:42.064873+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188481536 unmapped: 67190784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.023833275s of 10.397670746s, submitted: 209
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 ms_handle_reset con 0x557adb193400 session 0x557ad5347500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:43.064986+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 ms_handle_reset con 0x557add020800 session 0x557ad7b27c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add023000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 ms_handle_reset con 0x557add023000 session 0x557ad4eb9a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad52c7000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 ms_handle_reset con 0x557ad52c7000 session 0x557ad7b5ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 67108864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 heartbeat osd_stat(store_statfs(0x4f69c5000/0x0/0x4ffc00000, data 0x2f97663/0x3182000, compress 0x0/0x0/0x0, omap 0x61efa, meta 0x604e106), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3008001 data_alloc: 251658240 data_used: 28242212
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:44.065106+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 67100672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:45.065224+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 67100672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:46.065371+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 heartbeat osd_stat(store_statfs(0x4f69c6000/0x0/0x4ffc00000, data 0x2f97601/0x3181000, compress 0x0/0x0/0x0, omap 0x621c5, meta 0x604de3b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 67100672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:47.065516+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 67100672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:48.065674+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188579840 unmapped: 67092480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010231 data_alloc: 251658240 data_used: 28246273
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:49.065846+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188579840 unmapped: 67092480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 345 ms_handle_reset con 0x557ad7191000 session 0x557ad7b36000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 345 ms_handle_reset con 0x557adb193400 session 0x557ad5e9ca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:50.065968+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188579840 unmapped: 67092480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add023000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:51.066104+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 346 ms_handle_reset con 0x557add023000 session 0x557ad4eb9500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 66764800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbe800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 346 ms_handle_reset con 0x557adcfbe800 session 0x557ad6bf0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:52.066278+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 346 ms_handle_reset con 0x557ad750c800 session 0x557ad4eb8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 346 ms_handle_reset con 0x557add020800 session 0x557ad5347a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 347 ms_handle_reset con 0x557ad8f3d000 session 0x557ad4eb8e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 347 heartbeat osd_stat(store_statfs(0x4f67be000/0x0/0x4ffc00000, data 0x319bfa0/0x338c000, compress 0x0/0x0/0x0, omap 0x62b34, meta 0x604d4cc), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92bf000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 347 ms_handle_reset con 0x557ad92bf000 session 0x557ad5a84380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189300736 unmapped: 66371584 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad8f3dc00 session 0x557ad78f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad750c800 session 0x557ad789a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.859265327s of 10.059959412s, submitted: 137
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:53.066414+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad8f3d000 session 0x557ad5e9d6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92bf000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad92bf000 session 0x557ad5a84e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 heartbeat osd_stat(store_statfs(0x4f67b7000/0x0/0x4ffc00000, data 0x319fc74/0x3393000, compress 0x0/0x0/0x0, omap 0x6323f, meta 0x604cdc1), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189341696 unmapped: 66330624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3040683 data_alloc: 251658240 data_used: 28853704
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad71acc00 session 0x557ad5a84700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad5ebac00 session 0x557ad5328380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:54.066528+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad5ebac00 session 0x557ad791f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad71acc00 session 0x557ad5328380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 66469888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:55.066674+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 66469888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:56.066798+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 ms_handle_reset con 0x557ad8f3d000 session 0x557ad791ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 66461696 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92bf000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 349 ms_handle_reset con 0x557ad92bf000 session 0x557ad79381c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:57.066907+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 349 ms_handle_reset con 0x557adcfbf000 session 0x557ad5351500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 349 ms_handle_reset con 0x557ad5ebac00 session 0x557ad78f08c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189251584 unmapped: 66420736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 350 heartbeat osd_stat(store_statfs(0x4f6e0d000/0x0/0x4ffc00000, data 0x2b49789/0x2d3d000, compress 0x0/0x0/0x0, omap 0x639a4, meta 0x604c65c), peers [1,2] op hist [1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 350 ms_handle_reset con 0x557add020800 session 0x557ad53296c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 350 ms_handle_reset con 0x557ad71acc00 session 0x557ad524ddc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 350 ms_handle_reset con 0x557ad750c800 session 0x557ad789a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:58.067112+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92bf000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189267968 unmapped: 66404352 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998052 data_alloc: 251658240 data_used: 28031960
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 ms_handle_reset con 0x557ad92bf000 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 ms_handle_reset con 0x557add024400 session 0x557ad791e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:39:59.067267+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 ms_handle_reset con 0x557ad8f3d000 session 0x557ad78f0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 ms_handle_reset con 0x557ad5ebac00 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 ms_handle_reset con 0x557ad71acc00 session 0x557ad791e540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 71778304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:00.067447+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad750c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 353 ms_handle_reset con 0x557ad750c800 session 0x557ad733c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 71778304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:01.067567+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 353 heartbeat osd_stat(store_statfs(0x4f78cc000/0x0/0x4ffc00000, data 0x2081a5a/0x2279000, compress 0x0/0x0/0x0, omap 0x64528, meta 0x604bad8), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 353 ms_handle_reset con 0x557ad6394c00 session 0x557ad37c3180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 353 ms_handle_reset con 0x557ad7190400 session 0x557ad518fa40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 71778304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:02.067681+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 354 ms_handle_reset con 0x557ad8f3d000 session 0x557ad790ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 354 ms_handle_reset con 0x557ad71acc00 session 0x557ad7b5afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 354 ms_handle_reset con 0x557add020800 session 0x557ad4eb8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 87408640 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 355 heartbeat osd_stat(store_statfs(0x4f82a8000/0x0/0x4ffc00000, data 0xd9d63e/0xf96000, compress 0x0/0x0/0x0, omap 0x649fd, meta 0x604b603), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 355 ms_handle_reset con 0x557ad6394c00 session 0x557ad789b6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 355 ms_handle_reset con 0x557ad5ebac00 session 0x557ad7b5aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:03.067790+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168321024 unmapped: 87351296 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734325 data_alloc: 218103808 data_used: 134603
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.545272827s of 10.910304070s, submitted: 220
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 ms_handle_reset con 0x557add024400 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:04.067884+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 ms_handle_reset con 0x557ad7190400 session 0x557ad7b36fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0xd9cfde/0xf97000, compress 0x0/0x0/0x0, omap 0x64c29, meta 0x604b3d7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167133184 unmapped: 88539136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71acc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 ms_handle_reset con 0x557ad71acc00 session 0x557ad791fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:05.068008+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 ms_handle_reset con 0x557ad5ebac00 session 0x557ad5e4ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167157760 unmapped: 88514560 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 357 ms_handle_reset con 0x557ad8f3d000 session 0x557ad4eb8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:06.068140+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 357 ms_handle_reset con 0x557ad6394c00 session 0x557ad71a8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167231488 unmapped: 88440832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:07.068279+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 357 ms_handle_reset con 0x557add020800 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167256064 unmapped: 88416256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 357 handle_osd_map epochs [357,358], i have 358, src has [1,358]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 358 ms_handle_reset con 0x557ad5eba800 session 0x557ad4eb2a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:08.068410+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 358 ms_handle_reset con 0x557ad7190400 session 0x557ad7b36700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167280640 unmapped: 88391680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2743153 data_alloc: 218103808 data_used: 139292
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 ms_handle_reset con 0x557ad5eba800 session 0x557ad78ec8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:09.068550+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 ms_handle_reset con 0x557add024400 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 ms_handle_reset con 0x557ad5ebac00 session 0x557ad7475c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad6394c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 ms_handle_reset con 0x557ad6394c00 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167346176 unmapped: 88326144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:10.068691+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f8bac000/0x0/0x4ffc00000, data 0xda205b/0xf9c000, compress 0x0/0x0/0x0, omap 0x66014, meta 0x6049fec), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167354368 unmapped: 88317952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:11.068848+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 ms_handle_reset con 0x557ad5eba800 session 0x557ad4eb2380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 ms_handle_reset con 0x557ad5ebac00 session 0x557ad6bf01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 ms_handle_reset con 0x557ad7190400 session 0x557ad791b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167337984 unmapped: 88334336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:12.068996+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f8bae000/0x0/0x4ffc00000, data 0xda36cf/0xf9c000, compress 0x0/0x0/0x0, omap 0x667f9, meta 0x6049807), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 ms_handle_reset con 0x557add024400 session 0x557ad5346c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167337984 unmapped: 88334336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 361 ms_handle_reset con 0x557ad8f3d000 session 0x557ad53508c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:13.069127+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 361 heartbeat osd_stat(store_statfs(0x4f8ba7000/0x0/0x4ffc00000, data 0xda53cd/0xfa1000, compress 0x0/0x0/0x0, omap 0x66d86, meta 0x604927a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167346176 unmapped: 88326144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2750180 data_alloc: 218103808 data_used: 139081
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:14.069282+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167346176 unmapped: 88326144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:15.069404+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167346176 unmapped: 88326144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:16.069539+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.606457710s of 12.237533569s, submitted: 286
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 361 ms_handle_reset con 0x557ad5ebac00 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:17.069683+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167346176 unmapped: 88326144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 362 ms_handle_reset con 0x557ad7190400 session 0x557ad7911180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 363 ms_handle_reset con 0x557add024400 session 0x557ad78f1180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:18.069793+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 363 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b37880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167411712 unmapped: 88260608 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2762720 data_alloc: 218103808 data_used: 139309
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:19.069971+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 363 heartbeat osd_stat(store_statfs(0x4f8b9d000/0x0/0x4ffc00000, data 0xda8d13/0xfab000, compress 0x0/0x0/0x0, omap 0x672ae, meta 0x6048d52), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167411712 unmapped: 88260608 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add025c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 363 ms_handle_reset con 0x557add025c00 session 0x557ad524c1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:20.070110+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167411712 unmapped: 88260608 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 364 heartbeat osd_stat(store_statfs(0x4f8b9b000/0x0/0x4ffc00000, data 0xda8d85/0xfad000, compress 0x0/0x0/0x0, omap 0x672ae, meta 0x6048d52), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 364 ms_handle_reset con 0x557ad5eba800 session 0x557ad791f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:21.070436+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 364 heartbeat osd_stat(store_statfs(0x4f8b9a000/0x0/0x4ffc00000, data 0xdaa921/0xfb0000, compress 0x0/0x0/0x0, omap 0x677a5, meta 0x604885b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 365 ms_handle_reset con 0x557ad5ebac00 session 0x557ad5328540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 365 ms_handle_reset con 0x557ad7190400 session 0x557ad7535a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 365 ms_handle_reset con 0x557add020800 session 0x557ad518fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:22.070623+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:23.070791+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a9400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 365 ms_handle_reset con 0x557ad78a9400 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 366 ms_handle_reset con 0x557ad5eba800 session 0x557ad78ed180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2777547 data_alloc: 218103808 data_used: 139423
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:24.070967+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 ms_handle_reset con 0x557ad5ebac00 session 0x557ad5a84a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 ms_handle_reset con 0x557ad7190400 session 0x557ad4eb8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 ms_handle_reset con 0x557add024400 session 0x557ad4eb2e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f8b92000/0x0/0x4ffc00000, data 0xdae5ac/0xfb8000, compress 0x0/0x0/0x0, omap 0x679cd, meta 0x6048633), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:25.071098+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f8b8c000/0x0/0x4ffc00000, data 0xdb0639/0xfbc000, compress 0x0/0x0/0x0, omap 0x67ec8, meta 0x6048138), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:26.071230+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:27.071389+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5fd0800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 ms_handle_reset con 0x557ad5fd0800 session 0x557ad5e9d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.225564957s of 11.294611931s, submitted: 96
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:28.071548+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 368 ms_handle_reset con 0x557ad5eba800 session 0x557ad5e9d500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5ebac00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 368 ms_handle_reset con 0x557ad7190400 session 0x557ad791ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 368 ms_handle_reset con 0x557add024400 session 0x557ad71a8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2787549 data_alloc: 218103808 data_used: 140593
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:29.071687+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167403520 unmapped: 88268800 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 369 ms_handle_reset con 0x557adb19b000 session 0x557ad524ce00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 369 ms_handle_reset con 0x557ad5ebac00 session 0x557ad789a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 369 ms_handle_reset con 0x557add020800 session 0x557ad4eb3340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:30.071806+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 369 ms_handle_reset con 0x557ad7190400 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167419904 unmapped: 88252416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 370 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b376c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 370 heartbeat osd_stat(store_statfs(0x4f8b89000/0x0/0x4ffc00000, data 0xdb3f7c/0xfc3000, compress 0x0/0x0/0x0, omap 0x684db, meta 0x6047b25), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:31.071942+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167428096 unmapped: 88244224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:32.072151+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167428096 unmapped: 88244224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 371 ms_handle_reset con 0x557adb19b000 session 0x557ad71a8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 371 ms_handle_reset con 0x557add024400 session 0x557ad791fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 371 heartbeat osd_stat(store_statfs(0x4f8b82000/0x0/0x4ffc00000, data 0xdb72a3/0xfc8000, compress 0x0/0x0/0x0, omap 0x6883b, meta 0x60477c5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:33.072316+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167428096 unmapped: 88244224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2794295 data_alloc: 218103808 data_used: 142160
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:34.072463+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167428096 unmapped: 88244224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 372 ms_handle_reset con 0x557ad5eba800 session 0x557ad6bf1500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 372 ms_handle_reset con 0x557ad7190400 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:35.072601+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 372 ms_handle_reset con 0x557adb19b000 session 0x557ad4eb2e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 88203264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add020800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:36.072772+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 88186880 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb196400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 374 ms_handle_reset con 0x557adb196400 session 0x557ad524d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 374 ms_handle_reset con 0x557add020800 session 0x557ad791ee00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:37.072954+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167550976 unmapped: 88121344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 374 handle_osd_map epochs [374,375], i have 375, src has [1,375]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.663015366s of 10.142137527s, submitted: 141
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 375 heartbeat osd_stat(store_statfs(0x4f8b7d000/0x0/0x4ffc00000, data 0xdbc205/0xfcb000, compress 0x0/0x0/0x0, omap 0x6909f, meta 0x6046f61), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 376 ms_handle_reset con 0x557ad7190400 session 0x557ad733c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:38.073117+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167575552 unmapped: 88096768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808099 data_alloc: 218103808 data_used: 142886
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:39.073249+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb196400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 87801856 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 ms_handle_reset con 0x557ad5eba800 session 0x557ad789a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 ms_handle_reset con 0x557adb19b000 session 0x557ad78ed6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb195400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:40.073414+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 ms_handle_reset con 0x557adb195400 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 87793664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 ms_handle_reset con 0x557adb196400 session 0x557ad5350fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 379 ms_handle_reset con 0x557ad5eddc00 session 0x557ad518f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:41.073544+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 87793664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 380 ms_handle_reset con 0x557ad7190400 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 380 ms_handle_reset con 0x557ad5eba800 session 0x557ad71a8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb195400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 380 ms_handle_reset con 0x557adb195400 session 0x557ad5a848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:42.073730+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 380 ms_handle_reset con 0x557adb19b000 session 0x557ad5328540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168951808 unmapped: 86720512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 381 heartbeat osd_stat(store_statfs(0x4f8b32000/0x0/0x4ffc00000, data 0xe06995/0x1018000, compress 0x0/0x0/0x0, omap 0x6a7b5, meta 0x604584b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:43.073861+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 381 ms_handle_reset con 0x557ad5eddc00 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168968192 unmapped: 86704128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 382 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b37180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2826312 data_alloc: 218103808 data_used: 143385
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:44.073957+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 382 ms_handle_reset con 0x557ad7190400 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169000960 unmapped: 86671360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb196400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 383 ms_handle_reset con 0x557adb196400 session 0x557ad791b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:45.074090+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169009152 unmapped: 86663168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:46.074244+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 383 ms_handle_reset con 0x557ad5eba800 session 0x557ad78eca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169009152 unmapped: 86663168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 ms_handle_reset con 0x557ad5eddc00 session 0x557ad4eb2380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:47.074370+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169025536 unmapped: 86646784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 ms_handle_reset con 0x557ad7190400 session 0x557ad518fa40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.624364853s of 10.005038261s, submitted: 268
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 ms_handle_reset con 0x557adb19b000 session 0x557ad5e9d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f8b23000/0x0/0x4ffc00000, data 0xe0d920/0x1025000, compress 0x0/0x0/0x0, omap 0x6b683, meta 0x604497d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 384 handle_osd_map epochs [384,385], i have 385, src has [1,385]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:48.074510+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 385 ms_handle_reset con 0x557ad92be800 session 0x557ad790ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169025536 unmapped: 86646784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2831773 data_alloc: 218103808 data_used: 145098
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:49.074646+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 386 ms_handle_reset con 0x557ad92be800 session 0x557ad791f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169017344 unmapped: 86654976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 386 ms_handle_reset con 0x557ad5eba800 session 0x557ad78ece00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 386 ms_handle_reset con 0x557ad5eddc00 session 0x557ad790a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 386 ms_handle_reset con 0x557ad7190400 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:50.074891+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170065920 unmapped: 85606400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb193800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557ad8d96800 session 0x557ad7b37880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:51.075039+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557adb19b000 session 0x557ad791f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557ad8d96800 session 0x557ad5e4f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 85573632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557ad5eddc00 session 0x557ad791f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 ms_handle_reset con 0x557ad92be800 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:52.075177+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 388 ms_handle_reset con 0x557ad7190400 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 85573632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f8b1c000/0x0/0x4ffc00000, data 0xe12bba/0x102e000, compress 0x0/0x0/0x0, omap 0x6c055, meta 0x6043fab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f8b1c000/0x0/0x4ffc00000, data 0xe12bba/0x102e000, compress 0x0/0x0/0x0, omap 0x6c055, meta 0x6043fab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:53.075277+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 389 ms_handle_reset con 0x557ad5eba800 session 0x557ad37c3180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 85549056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f8b13000/0x0/0x4ffc00000, data 0xe16213/0x1033000, compress 0x0/0x0/0x0, omap 0x6c8fb, meta 0x6043705), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851475 data_alloc: 218103808 data_used: 402275
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:54.075413+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 390 ms_handle_reset con 0x557ad92be800 session 0x557ad7b36fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 85540864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:55.075565+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad5eddc00 session 0x557ad733c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 85516288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 heartbeat osd_stat(store_statfs(0x4f8b0e000/0x0/0x4ffc00000, data 0xe19df3/0x103a000, compress 0x0/0x0/0x0, omap 0x6d18c, meta 0x6042e74), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:56.075700+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557adb193800 session 0x557ad5328fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557adcfc1800 session 0x557ad4eb8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 85516288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b5ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:57.075880+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 85516288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78ed180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7190400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad92be800 session 0x557ad53468c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad7190400 session 0x557ad78f0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.760316849s of 10.083900452s, submitted: 129
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 ms_handle_reset con 0x557ad5eba800 session 0x557ad789b6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:58.075993+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 85483520 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2865190 data_alloc: 218103808 data_used: 403473
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:40:59.076091+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 85475328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78f1180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92be800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:00.076202+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 ms_handle_reset con 0x557adcfc1800 session 0x557ad63848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f8b05000/0x0/0x4ffc00000, data 0xe1d50d/0x1043000, compress 0x0/0x0/0x0, omap 0x6d9c7, meta 0x6042639), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170205184 unmapped: 85467136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 ms_handle_reset con 0x557ad8d96800 session 0x557ad6bf1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 ms_handle_reset con 0x557ad92be800 session 0x557ad5350fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:01.076328+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170205184 unmapped: 85467136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f8b48000/0x0/0x4ffc00000, data 0xddd51d/0x1004000, compress 0x0/0x0/0x0, omap 0x6dafb, meta 0x6042505), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:02.076464+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 ms_handle_reset con 0x557ad5eba800 session 0x557ad5a84a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170237952 unmapped: 85434368 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:03.076696+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f8b43000/0x0/0x4ffc00000, data 0xddef9c/0x1007000, compress 0x0/0x0/0x0, omap 0x6e20b, meta 0x6041df5), peers [1,2] op hist [0,0,0,0,0,0,1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 393 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 85426176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:04.076855+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871031 data_alloc: 218103808 data_used: 147532
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 85426176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:05.077005+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557ad8d96800 session 0x557ad78f0e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 85426176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f8b3e000/0x0/0x4ffc00000, data 0xde0b38/0x100a000, compress 0x0/0x0/0x0, omap 0x6e455, meta 0x6041bab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557adcfc1800 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:06.077201+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 85426176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557adb19b000 session 0x557ad71a8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557ad5eba800 session 0x557ad6bf01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:07.077313+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557ad5eddc00 session 0x557ad71a9340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557adb19b000 session 0x557ad4eb21c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 84361216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 ms_handle_reset con 0x557ad8d96800 session 0x557ad524d180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:08.083803+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 84361216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.827701569s of 11.172548294s, submitted: 93
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557adcfc1800 session 0x557ad5347dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:09.083978+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881338 data_alloc: 218103808 data_used: 148117
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557ad5eba800 session 0x557ad791a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171343872 unmapped: 84328448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 heartbeat osd_stat(store_statfs(0x4f8b3b000/0x0/0x4ffc00000, data 0xde2778/0x100f000, compress 0x0/0x0/0x0, omap 0x6eace, meta 0x6041532), peers [1,2] op hist [0,1,0,5])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557ad5eddc00 session 0x557ad5a85340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557ad8d96800 session 0x557ad78ed6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:10.084154+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557ad7775c00 session 0x557ad4eb9340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 ms_handle_reset con 0x557adb19b000 session 0x557ad78ec000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 83820544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:11.084314+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 397 ms_handle_reset con 0x557adb19b000 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171851776 unmapped: 83820544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 397 ms_handle_reset con 0x557ad5eba800 session 0x557ad7911180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 397 ms_handle_reset con 0x557ad5eddc00 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:12.084548+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171802624 unmapped: 83869696 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 397 ms_handle_reset con 0x557ad7775c00 session 0x557ad5350000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:13.084663+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 398 ms_handle_reset con 0x557ad8d96800 session 0x557ad7910c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 83836928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f81ee000/0x0/0x4ffc00000, data 0x17312d2/0x195e000, compress 0x0/0x0/0x0, omap 0x6f0b0, meta 0x6040f50), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:14.084860+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951346 data_alloc: 218103808 data_used: 148702
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171859968 unmapped: 83812352 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:15.085034+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 398 ms_handle_reset con 0x557ad8d96800 session 0x557ad7b37180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171876352 unmapped: 83795968 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 398 ms_handle_reset con 0x557ad5eba800 session 0x557ad78ed880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:16.085170+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171884544 unmapped: 83787776 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 ms_handle_reset con 0x557ad5eddc00 session 0x557ad733c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:17.085339+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x1734ace/0x1964000, compress 0x0/0x0/0x0, omap 0x6fa79, meta 0x6040587), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 83738624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:18.085468+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 83738624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:19.085629+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2953525 data_alloc: 218103808 data_used: 148702
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.667795181s of 10.471121788s, submitted: 176
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb19b000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 ms_handle_reset con 0x557ad7775c00 session 0x557ad4eb8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 ms_handle_reset con 0x557adb19b000 session 0x557ad6bf01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171941888 unmapped: 83730432 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 ms_handle_reset con 0x557ad5eba800 session 0x557ad5e9d500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:20.085762+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f81e6000/0x0/0x4ffc00000, data 0x1734ace/0x1964000, compress 0x0/0x0/0x0, omap 0x6fa79, meta 0x6040587), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 83722240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:21.085946+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 83722240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:22.086238+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171950080 unmapped: 83722240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 399 handle_osd_map epochs [399,400], i have 400, src has [1,400]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:23.086384+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 83714048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 heartbeat osd_stat(store_statfs(0x4f81e4000/0x0/0x4ffc00000, data 0x1736595/0x1966000, compress 0x0/0x0/0x0, omap 0x701a5, meta 0x603fe5b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:24.086512+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad5eddc00 session 0x557ad7475340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956772 data_alloc: 218103808 data_used: 148768
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 83714048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad7775c00 session 0x557ad524c1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad8d96800 session 0x557ad791b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:25.086789+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d98000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad8d98000 session 0x557ad7939dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d98000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad8d98000 session 0x557ad4eb2540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 83714048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 ms_handle_reset con 0x557ad5eba800 session 0x557ad5a85c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:26.086952+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 83714048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:27.087072+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 ms_handle_reset con 0x557ad7775c00 session 0x557ad7b5afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171974656 unmapped: 83697664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x17365f7/0x1967000, compress 0x0/0x0/0x0, omap 0x701a5, meta 0x603fe5b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:28.087219+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172695552 unmapped: 82976768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 ms_handle_reset con 0x557ad8d96800 session 0x557ad7b5a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78edc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:29.087363+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012410 data_alloc: 218103808 data_used: 8486688
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 ms_handle_reset con 0x557ad5eddc00 session 0x557ad7b37dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.889658928s of 10.014397621s, submitted: 43
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad7192c00 session 0x557ad791a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172654592 unmapped: 83017728 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad5eba800 session 0x557ad5e9cfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:30.087535+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172654592 unmapped: 83017728 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad7775c00 session 0x557ad5328e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 heartbeat osd_stat(store_statfs(0x4f81df000/0x0/0x4ffc00000, data 0x1739d83/0x196d000, compress 0x0/0x0/0x0, omap 0x704ff, meta 0x603fb01), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d96800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad8d96800 session 0x557ad6bf0c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:31.087672+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad5eba800 session 0x557ad524ddc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 83001344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:32.087834+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 ms_handle_reset con 0x557ad5eddc00 session 0x557ad78f08c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 ms_handle_reset con 0x557ad7192c00 session 0x557ad7534380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172679168 unmapped: 82993152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f81e1000/0x0/0x4ffc00000, data 0x1739cbf/0x196b000, compress 0x0/0x0/0x0, omap 0x70d01, meta 0x603f2ff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:33.087993+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f81de000/0x0/0x4ffc00000, data 0x173b8af/0x196e000, compress 0x0/0x0/0x0, omap 0x71263, meta 0x603ed9d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172679168 unmapped: 82993152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 ms_handle_reset con 0x557ad7775c00 session 0x557ad7b5b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:34.088208+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909618 data_alloc: 218103808 data_used: 149994
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:35.088352+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d98000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 ms_handle_reset con 0x557ad8d98000 session 0x557ad71a9dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:36.088549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f8b2b000/0x0/0x4ffc00000, data 0xdee8af/0x1021000, compress 0x0/0x0/0x0, omap 0x710b2, meta 0x603ef4e), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:37.088764+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 ms_handle_reset con 0x557ad5eba800 session 0x557ad6bf08c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 ms_handle_reset con 0x557ad5eddc00 session 0x557ad518ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:38.088942+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 404 ms_handle_reset con 0x557ad7775c00 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f8b26000/0x0/0x4ffc00000, data 0xdf032e/0x1024000, compress 0x0/0x0/0x0, omap 0x71260, meta 0x603eda0), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 404 ms_handle_reset con 0x557ad7191c00 session 0x557ad791f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad738c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:39.089056+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 405 ms_handle_reset con 0x557ad738c000 session 0x557ad7b5a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2922937 data_alloc: 218103808 data_used: 154106
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 405 heartbeat osd_stat(store_statfs(0x4f8b21000/0x0/0x4ffc00000, data 0xdf1f3c/0x1029000, compress 0x0/0x0/0x0, omap 0x7185f, meta 0x603e7a1), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.480361938s of 10.646369934s, submitted: 141
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 ms_handle_reset con 0x557adb194400 session 0x557ad790a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:40.089195+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 ms_handle_reset con 0x557ad7192c00 session 0x557ad6bf0000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167182336 unmapped: 88489984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:41.089688+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 ms_handle_reset con 0x557ad5eba800 session 0x557ad7911180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 ms_handle_reset con 0x557ad5eddc00 session 0x557ad53508c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 167927808 unmapped: 87744512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7775c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 ms_handle_reset con 0x557ad7775c00 session 0x557ad791e380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:42.090035+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 407 ms_handle_reset con 0x557ad5eba800 session 0x557ad78f1880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f8047000/0x0/0x4ffc00000, data 0x18c57aa/0x1b03000, compress 0x0/0x0/0x0, omap 0x71f0c, meta 0x603e0f4), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168083456 unmapped: 87588864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 408 ms_handle_reset con 0x557ad5eddc00 session 0x557ad518ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:43.090205+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 408 ms_handle_reset con 0x557ad7191c00 session 0x557ad7b37500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168091648 unmapped: 87580672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:44.090809+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009756 data_alloc: 218103808 data_used: 154106
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 408 ms_handle_reset con 0x557adb194400 session 0x557ad5347500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168091648 unmapped: 87580672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:45.090919+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 409 ms_handle_reset con 0x557ad7192c00 session 0x557ad518f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168124416 unmapped: 87547904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:46.091199+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 409 ms_handle_reset con 0x557ad5eddc00 session 0x557ad791afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168124416 unmapped: 87547904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 409 ms_handle_reset con 0x557ad7191c00 session 0x557ad7910c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 409 ms_handle_reset con 0x557adb194400 session 0x557ad5351c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:47.091472+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 410 ms_handle_reset con 0x557ad5edc800 session 0x557ad5350000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 410 heartbeat osd_stat(store_statfs(0x4f803d000/0x0/0x4ffc00000, data 0x18caa8c/0x1b0b000, compress 0x0/0x0/0x0, omap 0x72bbe, meta 0x603d442), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 169172992 unmapped: 86499328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:48.091774+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad92c1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 411 ms_handle_reset con 0x557ad92c1c00 session 0x557ad7b36540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 411 ms_handle_reset con 0x557ad5edc800 session 0x557ad4eb3340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 411 ms_handle_reset con 0x557ad5eba800 session 0x557ad7b5b6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 87318528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:49.092818+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020048 data_alloc: 218103808 data_used: 155320
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 411 heartbeat osd_stat(store_statfs(0x4f803a000/0x0/0x4ffc00000, data 0x18ccb38/0x1b0f000, compress 0x0/0x0/0x0, omap 0x7312b, meta 0x603ced5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 87318528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:50.093318+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.751594543s of 10.093496323s, submitted: 104
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557ad7191c00 session 0x557ad790b180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557adb194400 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557ad5eddc00 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557ad5eddc00 session 0x557ad73f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 87318528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557ad5eba800 session 0x557ad4eb2700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:51.093896+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 87318528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 ms_handle_reset con 0x557ad5edc800 session 0x557ad7b36e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:52.094063+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 87318528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:53.094192+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 83943424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7509000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:54.094432+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 414 ms_handle_reset con 0x557ad87e3800 session 0x557ad4eb2fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096389 data_alloc: 234881024 data_used: 11304574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172138496 unmapped: 83533824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:55.094648+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 ms_handle_reset con 0x557ad7509000 session 0x557ad789a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f802e000/0x0/0x4ffc00000, data 0x18d368e/0x1b1a000, compress 0x0/0x0/0x0, omap 0x74178, meta 0x603be88), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 ms_handle_reset con 0x557ad7862800 session 0x557ad5e4f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172146688 unmapped: 83525632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 ms_handle_reset con 0x557ad5eba800 session 0x557ad5a85880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:56.094776+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172146688 unmapped: 83525632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:57.094947+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f802e000/0x0/0x4ffc00000, data 0x18d368e/0x1b1a000, compress 0x0/0x0/0x0, omap 0x74178, meta 0x603be88), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 415 handle_osd_map epochs [415,416], i have 416, src has [1,416]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 416 ms_handle_reset con 0x557ad5edc800 session 0x557ad4eb2e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172154880 unmapped: 83517440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 416 ms_handle_reset con 0x557ad5eddc00 session 0x557ad789a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:58.095119+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172154880 unmapped: 83517440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:41:59.095235+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099974 data_alloc: 234881024 data_used: 11304574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f802e000/0x0/0x4ffc00000, data 0x18d5238/0x1b1c000, compress 0x0/0x0/0x0, omap 0x743c2, meta 0x603bc3e), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172179456 unmapped: 83492864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:00.095472+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172179456 unmapped: 83492864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 417 ms_handle_reset con 0x557ad87e3800 session 0x557ad789a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:01.095611+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.220150948s of 11.369648933s, submitted: 109
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 418 ms_handle_reset con 0x557ad5edc800 session 0x557ad7938380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172171264 unmapped: 83501056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:02.095815+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f8027000/0x0/0x4ffc00000, data 0x18d8925/0x1b23000, compress 0x0/0x0/0x0, omap 0x74b3c, meta 0x603b4c4), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 419 ms_handle_reset con 0x557ad5eddc00 session 0x557ad53468c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 419 ms_handle_reset con 0x557ad5eba800 session 0x557ad4eb9a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 172171264 unmapped: 83501056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:03.096009+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 180846592 unmapped: 74825728 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:04.096126+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179066 data_alloc: 234881024 data_used: 11838963
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad7862800 session 0x557ad524ca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad538c800 session 0x557ad7b27c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181616640 unmapped: 74055680 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:05.122005+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181633024 unmapped: 74039296 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:06.122124+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad538c800 session 0x557ad4eb3c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181657600 unmapped: 74014720 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:07.122316+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad5eba800 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad5edc800 session 0x557ad789ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eddc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f7638000/0x0/0x4ffc00000, data 0x22b8fee/0x2508000, compress 0x0/0x0/0x0, omap 0x754b9, meta 0x603ab47), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 201048064 unmapped: 54624256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e2800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8f3d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad8f3d400 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad5eddc00 session 0x557ad4eb81c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad87e2800 session 0x557ad518ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538c800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5eba800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 ms_handle_reset con 0x557ad538c800 session 0x557ad7911a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:08.122525+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 180076544 unmapped: 75595776 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 422 ms_handle_reset con 0x557ad5eba800 session 0x557ad7910c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 422 ms_handle_reset con 0x557ad5edc800 session 0x557ad53508c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 422 ms_handle_reset con 0x557ad7862800 session 0x557ad5e9c380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:09.122784+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3380202 data_alloc: 234881024 data_used: 12252757
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179740672 unmapped: 75931648 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 423 ms_handle_reset con 0x557ad7862800 session 0x557ad4eb8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 423 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5347500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:10.123175+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 75915264 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 424 ms_handle_reset con 0x557ada0cc400 session 0x557ad5a84540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb192400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 424 ms_handle_reset con 0x557adb192400 session 0x557ad5351c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:11.123405+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179765248 unmapped: 75907072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:12.123803+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179765248 unmapped: 75907072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.824948311s of 11.462730408s, submitted: 224
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 425 heartbeat osd_stat(store_statfs(0x4f5236000/0x0/0x4ffc00000, data 0x46c1aaa/0x4914000, compress 0x0/0x0/0x0, omap 0x77624, meta 0x60389dc), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 425 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:13.123958+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179789824 unmapped: 75882496 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:14.124127+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3389954 data_alloc: 234881024 data_used: 12252659
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 426 heartbeat osd_stat(store_statfs(0x4f5231000/0x0/0x4ffc00000, data 0x46c3561/0x4917000, compress 0x0/0x0/0x0, omap 0x77c46, meta 0x60383ba), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179789824 unmapped: 75882496 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:15.124277+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 427 ms_handle_reset con 0x557ad5edd800 session 0x557ad791f340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 427 ms_handle_reset con 0x557ad7862800 session 0x557ad790b180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179789824 unmapped: 75882496 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:16.124438+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cc400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 75833344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb192400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 427 ms_handle_reset con 0x557adb192400 session 0x557ad78f0540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:17.124600+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 428 heartbeat osd_stat(store_statfs(0x4f5232000/0x0/0x4ffc00000, data 0x46c60a9/0x4918000, compress 0x0/0x0/0x0, omap 0x77e47, meta 0x60381b9), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179847168 unmapped: 75825152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 428 ms_handle_reset con 0x557adcfc1c00 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:18.124733+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 429 ms_handle_reset con 0x557ada0cc400 session 0x557ad4eb9880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 429 ms_handle_reset con 0x557ad5edd800 session 0x557ad791e1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 429 ms_handle_reset con 0x557ad7862800 session 0x557ad5328fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb192400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 75784192 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:19.124879+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398003 data_alloc: 234881024 data_used: 12252545
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 75784192 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:20.125007+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 75784192 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:21.125200+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 75784192 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:22.125364+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f522c000/0x0/0x4ffc00000, data 0x46c98c0/0x491e000, compress 0x0/0x0/0x0, omap 0x78cbf, meta 0x6037341), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 180551680 unmapped: 75120640 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.419602394s of 10.139101982s, submitted: 149
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:23.125498+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179503104 unmapped: 76169216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:24.125642+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407497 data_alloc: 234881024 data_used: 13450113
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 430 heartbeat osd_stat(store_statfs(0x4f5229000/0x0/0x4ffc00000, data 0x46cb35b/0x4921000, compress 0x0/0x0/0x0, omap 0x78e74, meta 0x603718c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:25.125871+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:26.126014+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:27.126163+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 430 handle_osd_map epochs [430,431], i have 431, src has [1,431]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:28.126363+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:29.126624+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413509 data_alloc: 234881024 data_used: 13450113
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:30.126804+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 432 heartbeat osd_stat(store_statfs(0x4f5221000/0x0/0x4ffc00000, data 0x46ceacb/0x4927000, compress 0x0/0x0/0x0, omap 0x795b9, meta 0x6036a47), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:31.126921+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 76136448 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:32.127075+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 192626688 unmapped: 63045632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.797373772s of 10.000555038s, submitted: 86
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:33.127179+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189562880 unmapped: 66109440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 432 heartbeat osd_stat(store_statfs(0x4f5225000/0x0/0x4ffc00000, data 0x46ceacb/0x4927000, compress 0x0/0x0/0x0, omap 0x795b9, meta 0x6036a47), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:34.127349+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515225 data_alloc: 234881024 data_used: 13766943
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189562880 unmapped: 66109440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:35.127535+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 432 ms_handle_reset con 0x557ad7513c00 session 0x557ad7b368c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189628416 unmapped: 66043904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:36.127724+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189636608 unmapped: 66035712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:37.127851+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 433 ms_handle_reset con 0x557add021000 session 0x557ad75341c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 66789376 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:38.127988+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 433 ms_handle_reset con 0x557add021800 session 0x557ad7b5aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 433 ms_handle_reset con 0x557ad7193400 session 0x557ad7910fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188891136 unmapped: 66781184 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:39.128116+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572166 data_alloc: 234881024 data_used: 13767041
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557ad7513c00 session 0x557ad4eb81c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 66772992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f423c000/0x0/0x4ffc00000, data 0x5be86d9/0x590e000, compress 0x0/0x0/0x0, omap 0x79965, meta 0x603669b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:40.128249+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 66772992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557ad7862800 session 0x557ad790ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:41.128579+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f4239000/0x0/0x4ffc00000, data 0x5bea277/0x5911000, compress 0x0/0x0/0x0, omap 0x79bb5, meta 0x603644b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188923904 unmapped: 66748416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:42.128761+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557adb192400 session 0x557ad7911340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557adcfc1c00 session 0x557ad78eca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188932096 unmapped: 66740224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.805345535s of 10.002385139s, submitted: 98
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:43.128897+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557ad5edd800 session 0x557ad789bc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 ms_handle_reset con 0x557add021000 session 0x557ad5a85180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 435 ms_handle_reset con 0x557ad7193400 session 0x557ad5347c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188964864 unmapped: 66707456 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:44.129060+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 435 ms_handle_reset con 0x557ad7513c00 session 0x557ad7534000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577131 data_alloc: 234881024 data_used: 13787439
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f4237000/0x0/0x4ffc00000, data 0x5bebdfc/0x5912000, compress 0x0/0x0/0x0, omap 0x7a149, meta 0x6035eb7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 435 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 ms_handle_reset con 0x557ad5edd800 session 0x557ad78ec380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 188981248 unmapped: 66691072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb2380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:45.129283+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 ms_handle_reset con 0x557add021000 session 0x557ad5a85a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189095936 unmapped: 66576384 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:46.129407+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 ms_handle_reset con 0x557ad7862800 session 0x557ad53508c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189227008 unmapped: 66445312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:47.129593+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189227008 unmapped: 66445312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 436 handle_osd_map epochs [436,437], i have 437, src has [1,437]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:48.129745+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 437 ms_handle_reset con 0x557add021800 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189235200 unmapped: 66437120 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:49.129893+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 437 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b36c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591333 data_alloc: 234881024 data_used: 14698865
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189251584 unmapped: 66420736 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:50.129994+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 ms_handle_reset con 0x557ad7862800 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 heartbeat osd_stat(store_statfs(0x4f422d000/0x0/0x4ffc00000, data 0x5bf11df/0x591b000, compress 0x0/0x0/0x0, omap 0x7b355, meta 0x6034cab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb2000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 ms_handle_reset con 0x557add021000 session 0x557ad5a85340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189267968 unmapped: 66404352 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:51.130137+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189308928 unmapped: 66363392 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 ms_handle_reset con 0x557add021800 session 0x557ad5a85500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:52.130341+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189308928 unmapped: 66363392 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:53.130550+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.062546730s of 10.238704681s, submitted: 113
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189308928 unmapped: 66363392 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:54.130730+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 440 ms_handle_reset con 0x557ad5edd800 session 0x557ad78ecc40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602129 data_alloc: 234881024 data_used: 14698865
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 440 ms_handle_reset con 0x557ad7862800 session 0x557ad5346fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189333504 unmapped: 66338816 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:55.130958+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 handle_osd_map epochs [441,441], i have 441, src has [1,441]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 heartbeat osd_stat(store_statfs(0x4f4228000/0x0/0x4ffc00000, data 0x5bf4894/0x5922000, compress 0x0/0x0/0x0, omap 0x7be6a, meta 0x6034196), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5e9c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 ms_handle_reset con 0x557add021000 session 0x557ad5346c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189366272 unmapped: 66306048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 ms_handle_reset con 0x557add021800 session 0x557ad791a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:56.131100+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190423040 unmapped: 65249280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:57.131263+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 ms_handle_reset con 0x557ad5edd800 session 0x557ad518f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190455808 unmapped: 65216512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 441 handle_osd_map epochs [441,442], i have 442, src has [1,442]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:58.131445+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 442 heartbeat osd_stat(store_statfs(0x4f41d6000/0x0/0x4ffc00000, data 0x5c43ecd/0x5974000, compress 0x0/0x0/0x0, omap 0x7ccb4, meta 0x603334c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190464000 unmapped: 65208320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:42:59.131617+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 443 ms_handle_reset con 0x557ad7862800 session 0x557ad790b6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3632300 data_alloc: 234881024 data_used: 18602865
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 443 ms_handle_reset con 0x557adcfc1c00 session 0x557ad524c000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190464000 unmapped: 65208320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:00.131767+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557add021000 session 0x557ad791e1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557adb198000 session 0x557ad5328c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190521344 unmapped: 65150976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:01.131933+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557ad5edd800 session 0x557ad78f1500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557ad7862800 session 0x557ad7b36e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557adb198000 session 0x557ad789b180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 190562304 unmapped: 65110016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:02.132155+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5347a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add021000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557add021000 session 0x557ad79396c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 ms_handle_reset con 0x557ad5edd800 session 0x557ad5a85180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7862800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 63930368 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:03.132299+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191766528 unmapped: 63905792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.429910660s of 10.694281578s, submitted: 208
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:04.132419+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 446 ms_handle_reset con 0x557adb198000 session 0x557ad791e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657013 data_alloc: 234881024 data_used: 19008369
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f2f94000/0x0/0x4ffc00000, data 0x5cdcd4d/0x5a14000, compress 0x0/0x0/0x0, omap 0x7e0aa, meta 0x71d1f56), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191373312 unmapped: 64299008 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:05.132630+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 446 ms_handle_reset con 0x557adcfc1c00 session 0x557ad78ecfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191414272 unmapped: 64258048 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:06.132763+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f2f94000/0x0/0x4ffc00000, data 0x5cdcd4d/0x5a14000, compress 0x0/0x0/0x0, omap 0x7e1bd, meta 0x71d1e43), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 ms_handle_reset con 0x557adcfbd800 session 0x557ad73f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ada0cd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 ms_handle_reset con 0x557ada0cd800 session 0x557ad5a85500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 ms_handle_reset con 0x557ad5edd800 session 0x557ad789bc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191496192 unmapped: 64176128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:07.132923+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f2f94000/0x0/0x4ffc00000, data 0x5cde8db/0x5a16000, compress 0x0/0x0/0x0, omap 0x7eb7e, meta 0x71d1482), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191496192 unmapped: 64176128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 ms_handle_reset con 0x557adb198000 session 0x557ad791afc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 447 handle_osd_map epochs [447,448], i have 448, src has [1,448]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:08.133053+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191496192 unmapped: 64176128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:09.133195+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662914 data_alloc: 234881024 data_used: 19136881
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 64159744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:10.133324+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 449 ms_handle_reset con 0x557adcfbd800 session 0x557ad790ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 449 ms_handle_reset con 0x557adcfc1c00 session 0x557ad791f6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad78a8400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189956096 unmapped: 65716224 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:11.133434+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 450 ms_handle_reset con 0x557ad78a8400 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194502656 unmapped: 61169664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 450 ms_handle_reset con 0x557ad5edd800 session 0x557ad789a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:12.133582+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 63406080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:13.133744+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 451 ms_handle_reset con 0x557adb198000 session 0x557ad4eb9a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 451 heartbeat osd_stat(store_statfs(0x4f2d5b000/0x0/0x4ffc00000, data 0x5f15b1e/0x5c51000, compress 0x0/0x0/0x0, omap 0x7f55d, meta 0x71d0aa3), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193323008 unmapped: 62349312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:14.133886+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690056 data_alloc: 234881024 data_used: 20869489
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.424650192s of 10.628837585s, submitted: 138
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193323008 unmapped: 62349312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:15.134005+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 452 ms_handle_reset con 0x557adcfbd800 session 0x557ad7910540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193339392 unmapped: 62332928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:16.134117+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 452 ms_handle_reset con 0x557adcfc1c00 session 0x557ad78ec380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 452 handle_osd_map epochs [452,453], i have 452, src has [1,453]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 453 ms_handle_reset con 0x557ad538d800 session 0x557ad7911340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193470464 unmapped: 62201856 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:17.134240+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f2d4d000/0x0/0x4ffc00000, data 0x5f1adc3/0x5c5b000, compress 0x0/0x0/0x0, omap 0x7fe0e, meta 0x71d01f2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 453 ms_handle_reset con 0x557ad5edd800 session 0x557ad789a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f2d4e000/0x0/0x4ffc00000, data 0x5f1ad61/0x5c5a000, compress 0x0/0x0/0x0, omap 0x7ff42, meta 0x71d00be), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193994752 unmapped: 61677568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:18.134420+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 454 ms_handle_reset con 0x557adb198000 session 0x557ad7910000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194306048 unmapped: 61366272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:19.134591+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718973 data_alloc: 234881024 data_used: 22413169
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194306048 unmapped: 61366272 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:20.134778+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557adcfbd800 session 0x557ad791e540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557ad7193400 session 0x557ad518ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557ad7513c00 session 0x557ad78f0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557ad7513c00 session 0x557ad7475880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194396160 unmapped: 61276160 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:21.134939+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b37a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 61251584 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:22.135122+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 ms_handle_reset con 0x557ad7193400 session 0x557ad5347dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 heartbeat osd_stat(store_statfs(0x4f2d46000/0x0/0x4ffc00000, data 0x5f1e3a1/0x5c60000, compress 0x0/0x0/0x0, omap 0x7fe38, meta 0x71d01c8), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 455 handle_osd_map epochs [456,456], i have 456, src has [1,456]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 456 ms_handle_reset con 0x557adb198000 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 456 ms_handle_reset con 0x557adcfbd800 session 0x557ad78f01c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 61194240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:23.135272+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 456 heartbeat osd_stat(store_statfs(0x4f3034000/0x0/0x4ffc00000, data 0x5c33f81/0x5976000, compress 0x0/0x0/0x0, omap 0x8532b, meta 0x71cacd5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 ms_handle_reset con 0x557ad5edd800 session 0x557ad791b340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 ms_handle_reset con 0x557ad7193400 session 0x557ad78eda40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 ms_handle_reset con 0x557ad7513c00 session 0x557ad74756c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194568192 unmapped: 61104128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:24.135395+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 heartbeat osd_stat(store_statfs(0x4f3034000/0x0/0x4ffc00000, data 0x5c33f81/0x5976000, compress 0x0/0x0/0x0, omap 0x8532b, meta 0x71cacd5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701527 data_alloc: 234881024 data_used: 22302526
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 194592768 unmapped: 61079552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:25.135563+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 heartbeat osd_stat(store_statfs(0x4f3030000/0x0/0x4ffc00000, data 0x5c35a0c/0x5978000, compress 0x0/0x0/0x0, omap 0x8560d, meta 0x71ca9f3), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 457 handle_osd_map epochs [458,458], i have 458, src has [1,458]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.092590332s of 10.611246109s, submitted: 151
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 458 ms_handle_reset con 0x557ad7191c00 session 0x557ad5350a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 458 ms_handle_reset con 0x557adb194400 session 0x557ad7534fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 458 ms_handle_reset con 0x557ad5edd800 session 0x557ad4eb2fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189751296 unmapped: 65921024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:26.135740+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189751296 unmapped: 65921024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:27.135896+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 458 ms_handle_reset con 0x557ad7191c00 session 0x557ad78f0380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 182616064 unmapped: 73056256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:28.136033+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 74350592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:29.136142+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444655 data_alloc: 218103808 data_used: 3376958
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 74350592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:30.136309+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 459 heartbeat osd_stat(store_statfs(0x4f44df000/0x0/0x4ffc00000, data 0x42500a7/0x44cb000, compress 0x0/0x0/0x0, omap 0x8613f, meta 0x71c9ec1), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:31.136487+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 74350592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:32.136614+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 74350592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 460 ms_handle_reset con 0x557ad7193400 session 0x557ad53468c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:33.136768+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 74350592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:34.136911+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 74227712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 461 heartbeat osd_stat(store_statfs(0x4f44d1000/0x0/0x4ffc00000, data 0x425374e/0x44d1000, compress 0x0/0x0/0x0, omap 0x8684b, meta 0x71c97b5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451387 data_alloc: 218103808 data_used: 3364670
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:35.137078+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 74227712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:36.137241+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 74227712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.328140259s of 11.656611443s, submitted: 112
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 461 ms_handle_reset con 0x557ad7513c00 session 0x557ad7b36c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:37.137389+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 74227712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:38.137578+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 74227712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 461 handle_osd_map epochs [461,462], i have 462, src has [1,462]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb194400 session 0x557ad4eb9a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:39.137790+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181387264 unmapped: 74285056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453682 data_alloc: 218103808 data_used: 3364670
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad7475880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad790bdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f44d6000/0x0/0x4ffc00000, data 0x42551cd/0x44d4000, compress 0x0/0x0/0x0, omap 0x869ed, meta 0x71c9613), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:40.137962+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181387264 unmapped: 74285056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:41.138131+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181387264 unmapped: 74285056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7513c00 session 0x557ad524cfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7862800 session 0x557ad7534000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad78f0fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad4eb8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:42.138279+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181428224 unmapped: 74244096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad6384540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:43.138484+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181436416 unmapped: 74235904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad78f0000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb194400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb194400 session 0x557ad518e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:44.138639+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181813248 unmapped: 73859072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7513c00 session 0x557ad7475180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad791a380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad524d340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3bc0000/0x0/0x4ffc00000, data 0x4b6c1fc/0x4dea000, compress 0x0/0x0/0x0, omap 0x87052, meta 0x71c8fae), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511772 data_alloc: 218103808 data_used: 3800859
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:45.138774+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 73818112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad78ed6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:46.138913+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 73818112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3bc0000/0x0/0x4ffc00000, data 0x4b6c1fc/0x4dea000, compress 0x0/0x0/0x0, omap 0x86ffd, meta 0x71c9003), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad7b5ae00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:47.139056+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 73818112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.018236160s of 10.447059631s, submitted: 148
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad5e9dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:48.139263+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6f9f000/0x0/0x4ffc00000, data 0x178f1fc/0x1a0d000, compress 0x0/0x0/0x0, omap 0x876ed, meta 0x71c8913), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:49.139427+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215436 data_alloc: 218103808 data_used: 170779
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:50.139573+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:51.139785+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:52.139957+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6f9f000/0x0/0x4ffc00000, data 0x178f1fc/0x1a0d000, compress 0x0/0x0/0x0, omap 0x876ed, meta 0x71c8913), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:53.140161+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:54.140341+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad5a84380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215436 data_alloc: 218103808 data_used: 170779
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad75348c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:55.140518+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 76095488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7513c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7513c00 session 0x557ad6bf0000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad71a8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad4eb2e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:56.140675+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179585024 unmapped: 76087296 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:57.140827+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 179585024 unmapped: 76087296 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:58.140987+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6f9e000/0x0/0x4ffc00000, data 0x178f20c/0x1a0e000, compress 0x0/0x0/0x0, omap 0x87789, meta 0x71c8877), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:43:59.141140+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274674 data_alloc: 234881024 data_used: 9763611
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:00.141657+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:01.142123+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.576943398s of 14.628355980s, submitted: 26
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:02.142342+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:03.142549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6f9d000/0x0/0x4ffc00000, data 0x178f26f/0x1a0f000, compress 0x0/0x0/0x0, omap 0x87789, meta 0x71c8877), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adcfc1c00 session 0x557ad7475c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:04.142779+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3276283 data_alloc: 234881024 data_used: 9763611
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:05.143037+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:06.143176+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:07.143372+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 74391552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6f9d000/0x0/0x4ffc00000, data 0x178f26f/0x1a0f000, compress 0x0/0x0/0x0, omap 0x87789, meta 0x71c8877), peers [1,2] op hist [0,0,0,0,0,1,5])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6ac2000/0x0/0x4ffc00000, data 0x1c6a26f/0x1eea000, compress 0x0/0x0/0x0, omap 0x87789, meta 0x71c8877), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:08.143531+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 187736064 unmapped: 67936256 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:09.143639+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367267 data_alloc: 234881024 data_used: 11905819
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:10.143765+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:11.143940+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:12.144114+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f6360000/0x0/0x4ffc00000, data 0x23cc26f/0x264c000, compress 0x0/0x0/0x0, omap 0x87789, meta 0x71c8877), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:13.144256+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:14.144390+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 66617344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367267 data_alloc: 234881024 data_used: 11905819
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:15.144525+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.602176666s of 13.197429657s, submitted: 113
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 66600960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557add024c00 session 0x557ad71a8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:16.144633+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 58155008 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad7474fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:17.144784+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 66535424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad5329500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb198000 session 0x557ad5346c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f4360000/0x0/0x4ffc00000, data 0x43cc20c/0x464b000, compress 0x0/0x0/0x0, omap 0x878b9, meta 0x71c8747), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:18.144920+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad78f16c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 66535424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:19.145121+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 66535424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531190 data_alloc: 234881024 data_used: 11934491
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:20.145318+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f4361000/0x0/0x4ffc00000, data 0x43cc1fc/0x464a000, compress 0x0/0x0/0x0, omap 0x87955, meta 0x71c86ab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 66535424 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad78ed340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7191c00 session 0x557ad790a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad63848c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad5351180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7193400 session 0x557ad71a9500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:21.145443+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:22.145622+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:23.145837+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:24.146019+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575158 data_alloc: 234881024 data_used: 11938488
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3cb6000/0x0/0x4ffc00000, data 0x4a781fc/0x4cf6000, compress 0x0/0x0/0x0, omap 0x879f1, meta 0x71c860f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:25.146154+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb198000 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:26.146313+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb198000 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad7938000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.144263268s of 11.659454346s, submitted: 43
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b37a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:27.146438+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 66355200 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7191c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7193400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:28.146634+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189325312 unmapped: 66347008 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb9340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:29.146772+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 189325312 unmapped: 66347008 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557add024c00 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611384 data_alloc: 234881024 data_used: 17808056
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:30.146958+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 62316544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3cb5000/0x0/0x4ffc00000, data 0x4a7820c/0x4cf7000, compress 0x0/0x0/0x0, omap 0x87a8d, meta 0x71c8573), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:31.147121+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 62316544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad7535180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad5edd800 session 0x557ad733c540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:32.147268+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193691648 unmapped: 61980672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:33.147394+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193691648 unmapped: 61980672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:34.147578+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 193691648 unmapped: 61980672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557add024c00 session 0x557ad791e1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617355 data_alloc: 234881024 data_used: 17808056
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:35.147720+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 57827328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:36.147904+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3c90000/0x0/0x4ffc00000, data 0x4a9c21c/0x4d1c000, compress 0x0/0x0/0x0, omap 0x87e2a, meta 0x71c81d6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 198664192 unmapped: 57008128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad7508c00 session 0x557ad790ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad538d400 session 0x557ad78ec1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.109252930s of 10.181113243s, submitted: 22
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:37.148054+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 198672384 unmapped: 56999936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:38.148185+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 198844416 unmapped: 56827904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f3c8e000/0x0/0x4ffc00000, data 0x4a9c24f/0x4d1e000, compress 0x0/0x0/0x0, omap 0x88174, meta 0x71c7e8c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:39.148348+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 198844416 unmapped: 56827904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703725 data_alloc: 234881024 data_used: 22758584
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:40.148475+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 204259328 unmapped: 51412992 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:41.148625+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 49217536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f2e97000/0x0/0x4ffc00000, data 0x588424f/0x5b06000, compress 0x0/0x0/0x0, omap 0x88174, meta 0x71c7e8c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:42.148830+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 49217536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:43.149044+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 49217536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:44.149233+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 49217536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744645 data_alloc: 234881024 data_used: 23139512
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:45.149418+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 49217536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:46.149596+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 38625280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.595234871s of 10.001080513s, submitted: 179
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:47.149735+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f1996000/0x0/0x4ffc00000, data 0x588424f/0x5b06000, compress 0x0/0x0/0x0, omap 0x88174, meta 0x8367e8c), peers [1,2] op hist [1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 214335488 unmapped: 41336832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:48.149879+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 40738816 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f1343000/0x0/0x4ffc00000, data 0x621724f/0x6499000, compress 0x0/0x0/0x0, omap 0x881c3, meta 0x8367e3d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:49.150047+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 40558592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f1343000/0x0/0x4ffc00000, data 0x621724f/0x6499000, compress 0x0/0x0/0x0, omap 0x881c3, meta 0x8367e3d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817297 data_alloc: 234881024 data_used: 24327864
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:50.150192+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 40558592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:51.150388+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 40558592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:52.150574+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 40558592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:53.150776+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216162304 unmapped: 39510016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f1343000/0x0/0x4ffc00000, data 0x621724f/0x6499000, compress 0x0/0x0/0x0, omap 0x87e79, meta 0x8368187), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:54.150960+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 40558592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f1367000/0x0/0x4ffc00000, data 0x622324f/0x64a5000, compress 0x0/0x0/0x0, omap 0x87e79, meta 0x8368187), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3814181 data_alloc: 234881024 data_used: 24327864
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:55.151111+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adb198000 session 0x557ad518ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557ad87e3800 session 0x557ad78ecfc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215138304 unmapped: 40534016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557add024c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 ms_handle_reset con 0x557adcfc1c00 session 0x557ad6bf0a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:56.151272+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215146496 unmapped: 40525824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:57.151422+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215146496 unmapped: 40525824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.491937637s of 10.656938553s, submitted: 101
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:58.151560+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215187456 unmapped: 40484864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f1367000/0x0/0x4ffc00000, data 0x621ce3d/0x64a0000, compress 0x0/0x0/0x0, omap 0x888be, meta 0x8367742), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:44:59.151760+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215187456 unmapped: 40484864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3820978 data_alloc: 234881024 data_used: 24223416
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:00.151912+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215212032 unmapped: 40460288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:01.152060+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215212032 unmapped: 40460288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:02.152269+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215212032 unmapped: 40460288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:03.152428+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f1359000/0x0/0x4ffc00000, data 0x622c575/0x64af000, compress 0x0/0x0/0x0, omap 0x89094, meta 0x8366f6c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 40443904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:04.152565+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 40443904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3825528 data_alloc: 234881024 data_used: 24224001
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:05.152687+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 40443904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:06.152857+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 40443904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:07.152974+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 40443904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:08.153198+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f1358000/0x0/0x4ffc00000, data 0x622d575/0x64b0000, compress 0x0/0x0/0x0, omap 0x89094, meta 0x8366f6c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 40411136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:09.153395+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 40411136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3825776 data_alloc: 234881024 data_used: 24224001
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:10.153535+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 40411136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:11.153823+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 40411136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.999625206s of 14.053805351s, submitted: 19
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 465 ms_handle_reset con 0x557adbe94000 session 0x557ad5346000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:12.154335+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f1358000/0x0/0x4ffc00000, data 0x622d575/0x64b0000, compress 0x0/0x0/0x0, omap 0x89094, meta 0x8366f6c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215310336 unmapped: 40361984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:13.154482+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215310336 unmapped: 40361984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:14.154637+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215326720 unmapped: 40345600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adbe94000 session 0x557ad5e9c380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833759 data_alloc: 234881024 data_used: 25305345
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:15.155373+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215351296 unmapped: 40321024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:16.155568+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adb198000 session 0x557ad5e4ec40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad87e3800 session 0x557ad4eb96c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215359488 unmapped: 40312832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x622f144/0x64b5000, compress 0x0/0x0/0x0, omap 0x89469, meta 0x8366b97), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:17.155745+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 40280064 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:18.155888+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215400448 unmapped: 40271872 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:19.156054+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 40255488 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1356000/0x0/0x4ffc00000, data 0x629e144/0x64b6000, compress 0x0/0x0/0x0, omap 0x898b1, meta 0x836674f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844705 data_alloc: 234881024 data_used: 25394945
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:20.156222+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 40239104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:21.156405+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 40239104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:22.156616+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 40239104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1356000/0x0/0x4ffc00000, data 0x629e144/0x64b6000, compress 0x0/0x0/0x0, omap 0x898b1, meta 0x836674f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:23.156749+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 40239104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:24.156923+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1356000/0x0/0x4ffc00000, data 0x629e144/0x64b6000, compress 0x0/0x0/0x0, omap 0x898b1, meta 0x836674f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad7512800 session 0x557ad73f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 40239104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad7512800 session 0x557ad7475c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844705 data_alloc: 234881024 data_used: 25394945
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:25.157082+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.396501541s of 13.434133530s, submitted: 20
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad87e3800 session 0x557ad7b27340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215457792 unmapped: 40214528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:26.157217+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215498752 unmapped: 40173568 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:27.157413+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216875008 unmapped: 38797312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:28.157549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216875008 unmapped: 38797312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:29.157806+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216875008 unmapped: 38797312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:30.158011+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3864083 data_alloc: 234881024 data_used: 26656017
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216875008 unmapped: 38797312 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:31.158162+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:32.158392+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:33.158584+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:34.158765+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:35.158898+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862659 data_alloc: 234881024 data_used: 26652945
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:36.159070+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.538640022s of 11.569927216s, submitted: 34
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:37.159258+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:38.159394+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:39.159515+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f1355000/0x0/0x4ffc00000, data 0x629e154/0x64b7000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 216891392 unmapped: 38780928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:40.159618+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916247 data_alloc: 251658240 data_used: 27889425
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 33423360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:41.159733+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7f000/0x0/0x4ffc00000, data 0x6874154/0x6a8d000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 33390592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:42.160436+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219267072 unmapped: 36405248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7f000/0x0/0x4ffc00000, data 0x6874154/0x6a8d000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:43.160808+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219267072 unmapped: 36405248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:44.160923+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7f000/0x0/0x4ffc00000, data 0x6874154/0x6a8d000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219267072 unmapped: 36405248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:45.161081+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3912071 data_alloc: 251658240 data_used: 27987729
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219267072 unmapped: 36405248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:46.161210+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219439104 unmapped: 36233216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:47.161384+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219439104 unmapped: 36233216 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:48.161522+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219447296 unmapped: 36225024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:49.161774+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219447296 unmapped: 36225024 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7f000/0x0/0x4ffc00000, data 0x6874154/0x6a8d000, compress 0x0/0x0/0x0, omap 0x89951, meta 0x83666af), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.231088638s of 13.277519226s, submitted: 16
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:50.161946+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3938961 data_alloc: 251658240 data_used: 27963153
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219561984 unmapped: 36110336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:51.162141+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad538d400 session 0x557ad5a84380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219922432 unmapped: 35749888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5a84540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:52.162431+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219922432 unmapped: 35749888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:53.162622+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219922432 unmapped: 35749888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:54.162744+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219922432 unmapped: 35749888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:55.162897+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7a000/0x0/0x4ffc00000, data 0x6b7f131/0x6a91000, compress 0x0/0x0/0x0, omap 0x89ae1, meta 0x836651f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3943437 data_alloc: 251658240 data_used: 29130513
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 219922432 unmapped: 35749888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:56.163085+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71ac400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad71ac400 session 0x557ad6bf0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f0d7a000/0x0/0x4ffc00000, data 0x6b7f131/0x6a91000, compress 0x0/0x0/0x0, omap 0x89ae1, meta 0x836651f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 40124416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:57.163308+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 40124416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:58.163504+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 40124416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:45:59.163648+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 40124416 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:00.163816+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677669 data_alloc: 234881024 data_used: 21550318
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f370e000/0x0/0x4ffc00000, data 0x41ec131/0x40fe000, compress 0x0/0x0/0x0, omap 0x89c11, meta 0x83663ef), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.439261436s of 10.515376091s, submitted: 40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:01.163955+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:02.164169+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:03.164373+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:04.164546+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adb198000 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557adbe94000 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f3706000/0x0/0x4ffc00000, data 0x41ec131/0x40fe000, compress 0x0/0x0/0x0, omap 0x89c61, meta 0x836639f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:05.164687+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71ac400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679316 data_alloc: 234881024 data_used: 22013166
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad71ac400 session 0x557ad78ec8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f3706000/0x0/0x4ffc00000, data 0x41ec131/0x40fe000, compress 0x0/0x0/0x0, omap 0x89c61, meta 0x836639f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:06.164845+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:07.164994+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f370f000/0x0/0x4ffc00000, data 0x41ec121/0x40fd000, compress 0x0/0x0/0x0, omap 0x89d01, meta 0x83662ff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215621632 unmapped: 40050688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad538d400 session 0x557ad78ed500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:08.165149+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad7512800 session 0x557ad5328fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215629824 unmapped: 40042496 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:09.165298+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 ms_handle_reset con 0x557ad7512800 session 0x557ad4eb3c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215638016 unmapped: 40034304 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:10.165464+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f3ce6000/0x0/0x4ffc00000, data 0x3c16111/0x3b26000, compress 0x0/0x0/0x0, omap 0x8a101, meta 0x8365eff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 466 handle_osd_map epochs [466,467], i have 467, src has [1,467]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad538d400 session 0x557ad73f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627224 data_alloc: 234881024 data_used: 21787870
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215646208 unmapped: 40026112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:11.165633+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215646208 unmapped: 40026112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.457561493s of 11.542878151s, submitted: 53
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad5edd800 session 0x557ad5a85880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad7508c00 session 0x557ad7b26c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:12.165781+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad71ac400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad71ac400 session 0x557ad5fe41c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215703552 unmapped: 39968768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:13.165915+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215703552 unmapped: 39968768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:14.166052+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 heartbeat osd_stat(store_statfs(0x4f3cea000/0x0/0x4ffc00000, data 0x389dcce/0x3b21000, compress 0x0/0x0/0x0, omap 0x8a78b, meta 0x8365875), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215703552 unmapped: 39968768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:15.166175+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616333 data_alloc: 234881024 data_used: 21329118
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215703552 unmapped: 39968768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:16.166330+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 215703552 unmapped: 39968768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad538d400 session 0x557ad4eb9180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:17.166437+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 ms_handle_reset con 0x557ad5edd800 session 0x557ad5e4f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:18.166574+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 467 handle_osd_map epochs [467,468], i have 468, src has [1,468]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:19.166681+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:20.166895+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395791 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:21.167024+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:22.167158+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:23.167301+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:24.167440+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:25.167559+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395791 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:26.167731+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:27.167994+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:28.168126+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:29.168469+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:30.168603+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395791 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:31.168787+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:32.168991+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:33.169145+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:34.169291+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:35.169449+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395791 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:36.169589+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:37.169763+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:38.169907+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x231a6eb/0x259e000, compress 0x0/0x0/0x0, omap 0x8ace5, meta 0x836531b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:39.170061+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.104429245s of 27.231143951s, submitted: 83
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 ms_handle_reset con 0x557ad7508c00 session 0x557ad7911180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:40.170209+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400293 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:41.170362+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 ms_handle_reset con 0x557ad7512800 session 0x557ad78ec000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 heartbeat osd_stat(store_statfs(0x4f5268000/0x0/0x4ffc00000, data 0x231c687/0x25a2000, compress 0x0/0x0/0x0, omap 0x8b315, meta 0x8364ceb), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adb198000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 ms_handle_reset con 0x557adb198000 session 0x557ad78ed180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:42.170532+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 ms_handle_reset con 0x557ad538d400 session 0x557ad5329340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 ms_handle_reset con 0x557ad5edd800 session 0x557ad5328000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 heartbeat osd_stat(store_statfs(0x4f5267000/0x0/0x4ffc00000, data 0x231c6c9/0x25a3000, compress 0x0/0x0/0x0, omap 0x8b315, meta 0x8364ceb), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 48783360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:43.170662+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 470 ms_handle_reset con 0x557ad7508c00 session 0x557ad7b37500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:44.170820+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f5266000/0x0/0x4ffc00000, data 0x231de77/0x25a4000, compress 0x0/0x0/0x0, omap 0x8b469, meta 0x8364b97), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:45.170940+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403280 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:46.171109+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:47.171281+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:48.171461+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:49.171651+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:50.171823+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403280 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f5266000/0x0/0x4ffc00000, data 0x231de77/0x25a4000, compress 0x0/0x0/0x0, omap 0x8b469, meta 0x8364b97), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:51.172000+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.571687698s of 12.633746147s, submitted: 33
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 470 handle_osd_map epochs [471,471], i have 471, src has [1,471]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 48775168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:52.172238+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 471 ms_handle_reset con 0x557ad7512800 session 0x557ad789a540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adbe94000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 471 ms_handle_reset con 0x557adbe94000 session 0x557ad5e4e8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206905344 unmapped: 48766976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:53.172460+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad538d400 session 0x557ad789ba40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206921728 unmapped: 48750592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:54.172614+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad7508c00 session 0x557ad7534c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:55.172772+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206929920 unmapped: 48742400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f525b000/0x0/0x4ffc00000, data 0x23214dd/0x25ad000, compress 0x0/0x0/0x0, omap 0x8bf93, meta 0x836406d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad7512800 session 0x557ad6bf0380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad87e3800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad87e3800 session 0x557ad4eb8c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415378 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 112K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.70 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 32.94 MB, 0.05 MB/s
                                           Interval WAL: 12K writes, 5355 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:56.172932+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206929920 unmapped: 48742400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad538d400 session 0x557ad7910a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:57.173107+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206954496 unmapped: 48717824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad5edd800 session 0x557ad7938c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad7508c00 session 0x557ad791fdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:58.173230+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206979072 unmapped: 48693248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557ad7512800 session 0x557ad518ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557adcfc1c00 session 0x557ad4eb8700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:46:59.173408+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207020032 unmapped: 48652288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f5262000/0x0/0x4ffc00000, data 0x23214ae/0x25aa000, compress 0x0/0x0/0x0, omap 0x8ce55, meta 0x83631ab), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:00.173558+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 ms_handle_reset con 0x557adcfc1c00 session 0x557ad791f500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207020032 unmapped: 48652288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416487 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:01.173746+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207020032 unmapped: 48652288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 ms_handle_reset con 0x557ad538d400 session 0x557ad790a8c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:02.173910+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 heartbeat osd_stat(store_statfs(0x4f525d000/0x0/0x4ffc00000, data 0x232309e/0x25ad000, compress 0x0/0x0/0x0, omap 0x8cf61, meta 0x836309f), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:03.174027+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:04.174165+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:05.174291+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416487 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:06.174415+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 heartbeat osd_stat(store_statfs(0x4f525d000/0x0/0x4ffc00000, data 0x232309e/0x25ad000, compress 0x0/0x0/0x0, omap 0x8d095, meta 0x8362f6b), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.990242004s of 15.166657448s, submitted: 106
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b36e00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:07.174530+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 heartbeat osd_stat(store_statfs(0x4f525d000/0x0/0x4ffc00000, data 0x232309e/0x25ad000, compress 0x0/0x0/0x0, omap 0x8d35c, meta 0x8362ca4), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:08.174650+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:09.174773+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 474 ms_handle_reset con 0x557ad7512800 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 474 heartbeat osd_stat(store_statfs(0x4f525c000/0x0/0x4ffc00000, data 0x2324b1d/0x25b0000, compress 0x0/0x0/0x0, omap 0x8da3f, meta 0x83625c1), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:10.174906+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad8d97400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 475 ms_handle_reset con 0x557ad8d97400 session 0x557ad4eb2c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3423815 data_alloc: 218103808 data_used: 6454361
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:11.175039+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207028224 unmapped: 48644096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 476 ms_handle_reset con 0x557ad7508c00 session 0x557ad4eb3340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:12.175184+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207036416 unmapped: 48635904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 477 ms_handle_reset con 0x557ad538d400 session 0x557ad4eb9180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:13.175304+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f5251000/0x0/0x4ffc00000, data 0x2329e45/0x25b9000, compress 0x0/0x0/0x0, omap 0x8e02d, meta 0x8361fd3), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:14.175448+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 47562752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 478 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b268c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 478 ms_handle_reset con 0x557ad7512800 session 0x557ad5350c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 478 heartbeat osd_stat(store_statfs(0x4f524c000/0x0/0x4ffc00000, data 0x232ba51/0x25bc000, compress 0x0/0x0/0x0, omap 0x8e768, meta 0x8361898), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:15.175581+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431910 data_alloc: 218103808 data_used: 6454946
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:16.175724+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:17.175893+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:18.176016+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 478 handle_osd_map epochs [478,479], i have 479, src has [1,479]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.796933174s of 11.081786156s, submitted: 82
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:19.176160+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:20.176282+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208150528 unmapped: 47521792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434348 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f524b000/0x0/0x4ffc00000, data 0x232d4ec/0x25bf000, compress 0x0/0x0/0x0, omap 0x8ed0b, meta 0x83612f5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:21.176626+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f524b000/0x0/0x4ffc00000, data 0x232d4ec/0x25bf000, compress 0x0/0x0/0x0, omap 0x8ed0b, meta 0x83612f5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:22.176792+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:23.176940+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 479 handle_osd_map epochs [479,480], i have 479, src has [1,480]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:24.177053+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:25.177233+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437122 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:26.177393+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:27.177517+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f5248000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8eeb3, meta 0x836114d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:28.177677+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:29.177816+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.595879555s of 11.609797478s, submitted: 26
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 ms_handle_reset con 0x557adcfc1c00 session 0x557ad37c3180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:30.177939+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438828 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f5247000/0x0/0x4ffc00000, data 0x232efcd/0x25c3000, compress 0x0/0x0/0x0, omap 0x8eeb3, meta 0x836114d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:31.178067+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 ms_handle_reset con 0x557adcfc1c00 session 0x557ad6bf0700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8efe7, meta 0x8361019), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:32.178267+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8efe7, meta 0x8361019), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:33.178406+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 47751168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8efe7, meta 0x8361019), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 ms_handle_reset con 0x557ad538d400 session 0x557ad7b26c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:34.178531+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:35.178645+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:36.178763+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:37.178895+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:38.179016+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:39.179133+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:40.179251+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:41.179383+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:42.179550+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:43.179680+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:44.179843+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:45.179999+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:46.180129+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:47.180247+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:48.180377+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:49.180501+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:50.180623+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:51.180753+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.687473297s of 21.715770721s, submitted: 18
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:52.180927+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:53.181069+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207929344 unmapped: 47742976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:54.181213+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:55.181436+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:56.181615+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:57.181759+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:58.181880+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:47:59.182003+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:00.182144+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:01.182365+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19250 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:31 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} v 0)
Feb 02 16:02:31 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:02.182544+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:03.182668+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:04.182820+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:05.182945+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:06.183110+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:07.183299+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:08.183440+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:09.183609+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:10.184359+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:11.185119+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:12.186294+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:13.187332+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:14.187449+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f524a000/0x0/0x4ffc00000, data 0x232ef6b/0x25c2000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:15.187897+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437367 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:16.188635+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:17.188872+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.679111481s of 25.838649750s, submitted: 106
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 ms_handle_reset con 0x557ad5edd800 session 0x557ad5e4f180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:18.188983+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f5249000/0x0/0x4ffc00000, data 0x232ef7b/0x25c3000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f5249000/0x0/0x4ffc00000, data 0x232ef7b/0x25c3000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:19.189409+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f5249000/0x0/0x4ffc00000, data 0x232ef7b/0x25c3000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 480 handle_osd_map epochs [480,481], i have 481, src has [1,481]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad7508c00 session 0x557ad78ed880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:20.189549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbc800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557adcfbc800 session 0x557ad7535340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad7512800 session 0x557ad5a85340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3446089 data_alloc: 218103808 data_used: 6455575
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:21.189701+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad538d400 session 0x557ad4eb96c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad5edd800 session 0x557ad790ac40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:22.189884+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f5242000/0x0/0x4ffc00000, data 0x2330b89/0x25c8000, compress 0x0/0x0/0x0, omap 0x8f76d, meta 0x8360893), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad7508c00 session 0x557ad7535880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:23.190030+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad86d8c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 ms_handle_reset con 0x557ad86d8c00 session 0x557ad5346fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 207986688 unmapped: 47685632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 ms_handle_reset con 0x557ad538d400 session 0x557ad71a8540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 ms_handle_reset con 0x557adcfc1c00 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:24.190178+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 ms_handle_reset con 0x557ad5edd800 session 0x557ad5347a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 ms_handle_reset con 0x557ad7508c00 session 0x557ad5e9c700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:25.190308+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444891 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:26.190400+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:27.190498+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f5244000/0x0/0x4ffc00000, data 0x23326f7/0x25c8000, compress 0x0/0x0/0x0, omap 0x8fae4, meta 0x836051c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:28.190673+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:29.190788+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f5244000/0x0/0x4ffc00000, data 0x23326f7/0x25c8000, compress 0x0/0x0/0x0, omap 0x8fae4, meta 0x836051c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:30.190913+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f5244000/0x0/0x4ffc00000, data 0x23326f7/0x25c8000, compress 0x0/0x0/0x0, omap 0x8fae4, meta 0x836051c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444891 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f5244000/0x0/0x4ffc00000, data 0x23326f7/0x25c8000, compress 0x0/0x0/0x0, omap 0x8fae4, meta 0x836051c), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:31.191049+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:32.191214+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:33.191336+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.772833824s of 15.853541374s, submitted: 31
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:34.191478+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:35.191640+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208003072 unmapped: 47669248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:36.191832+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:37.192026+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:38.192148+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:39.192423+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:40.192613+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:41.192774+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:42.192965+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:43.193201+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:44.193427+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:45.193589+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:46.193769+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:47.193989+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:48.194335+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:49.194467+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:50.194619+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:51.194757+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:52.194919+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:53.195071+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:54.195242+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:55.195399+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:56.195650+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:57.195776+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208011264 unmapped: 47661056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:58.195923+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:48:59.196163+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:00.196375+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448385 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:01.196518+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:02.196690+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:03.196847+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x2334176/0x25cb000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:04.197008+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.725296021s of 31.735389709s, submitted: 14
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:05.197183+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451240 data_alloc: 218103808 data_used: 6455559
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:06.197357+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 ms_handle_reset con 0x557ad7512800 session 0x557ad5e4fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:07.197511+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:08.197653+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f523f000/0x0/0x4ffc00000, data 0x23341e8/0x25cd000, compress 0x0/0x0/0x0, omap 0x8fc8d, meta 0x8360373), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 483 handle_osd_map epochs [483,484], i have 484, src has [1,484]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7512800 session 0x557ad7b26540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:09.197769+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:10.197925+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad538d400 session 0x557ad790bdc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455327 data_alloc: 218103808 data_used: 6455574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:11.198072+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f523b000/0x0/0x4ffc00000, data 0x2335de7/0x25d1000, compress 0x0/0x0/0x0, omap 0x8fd9c, meta 0x8360264), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:12.198250+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:13.198384+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:14.198547+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:15.198808+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455327 data_alloc: 218103808 data_used: 6455574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:16.199044+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208019456 unmapped: 47652864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:17.199224+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f523b000/0x0/0x4ffc00000, data 0x2335de7/0x25d1000, compress 0x0/0x0/0x0, omap 0x8fd9c, meta 0x8360264), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 47644672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:18.199361+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 47644672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:19.199519+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f523b000/0x0/0x4ffc00000, data 0x2335de7/0x25d1000, compress 0x0/0x0/0x0, omap 0x8fd9c, meta 0x8360264), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 47644672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:20.199804+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 47644672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455327 data_alloc: 218103808 data_used: 6455574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:21.200000+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208027648 unmapped: 47644672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:22.200266+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad5edd800 session 0x557ad791a000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7508c00 session 0x557ad790a1c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557adcfc1c00 session 0x557ad7911dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5fe5180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.458904266s of 17.480876923s, submitted: 12
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad538d400 session 0x557ad5350540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad5edd800 session 0x557ad5328fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7508c00 session 0x557ad7939a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7512800 session 0x557ad5fe4000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7512800 session 0x557ad5a85a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:23.200390+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4d14000/0x0/0x4ffc00000, data 0x285ae59/0x2af8000, compress 0x0/0x0/0x0, omap 0x8fe3a, meta 0x83601c6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:24.200518+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4d14000/0x0/0x4ffc00000, data 0x285ae59/0x2af8000, compress 0x0/0x0/0x0, omap 0x8fe3a, meta 0x83601c6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:25.200660+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3499140 data_alloc: 218103808 data_used: 6455574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:26.200784+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4d14000/0x0/0x4ffc00000, data 0x285ae59/0x2af8000, compress 0x0/0x0/0x0, omap 0x8fe3a, meta 0x83601c6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:27.200914+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4d14000/0x0/0x4ffc00000, data 0x285ae59/0x2af8000, compress 0x0/0x0/0x0, omap 0x8fe3a, meta 0x83601c6), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:28.201047+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208101376 unmapped: 47570944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:29.201201+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad538d400 session 0x557ad524cc40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208437248 unmapped: 47235072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:30.202002+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208437248 unmapped: 47235072 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525199 data_alloc: 234881024 data_used: 10168614
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:31.202107+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4ce9000/0x0/0x4ffc00000, data 0x2884e7c/0x2b23000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:32.202257+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4ce9000/0x0/0x4ffc00000, data 0x2884e7c/0x2b23000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:33.202373+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:34.202508+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:35.202657+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3535055 data_alloc: 234881024 data_used: 11839782
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:36.202833+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:37.202950+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4ce9000/0x0/0x4ffc00000, data 0x2884e7c/0x2b23000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:38.203080+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:39.203206+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:40.203310+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 47038464 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.429323196s of 18.551042557s, submitted: 38
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550131 data_alloc: 234881024 data_used: 11874598
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:41.203428+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f460d000/0x0/0x4ffc00000, data 0x2f60e7c/0x31ff000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [0,0,0,0,1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 212844544 unmapped: 42827776 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:42.203621+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 44081152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:43.203804+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 44081152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:44.203931+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f42df000/0x0/0x4ffc00000, data 0x3286e7c/0x3525000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 44081152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:45.204067+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 44081152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615997 data_alloc: 234881024 data_used: 13035814
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:46.204213+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 44081152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:47.204357+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:48.204531+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:49.204799+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:50.205002+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x3289e7c/0x3528000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612405 data_alloc: 234881024 data_used: 13039910
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:51.205226+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:52.205403+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x3289e7c/0x3528000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:53.205549+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:54.205687+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:55.205869+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612661 data_alloc: 234881024 data_used: 13048102
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:56.205991+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x3289e7c/0x3528000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:57.206151+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:58.206299+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.565576553s of 17.903003693s, submitted: 112
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 44146688 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b361c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad7508c00 session 0x557ad7911180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfc1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:49:59.206420+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557adcfc1c00 session 0x557ad5e9ca80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x3289e7c/0x3528000, compress 0x0/0x0/0x0, omap 0x8fed8, meta 0x8360128), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:00.206579+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:01.206744+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469969 data_alloc: 218103808 data_used: 6455574
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:02.206900+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4ccf000/0x0/0x4ffc00000, data 0x2335de7/0x25d1000, compress 0x0/0x0/0x0, omap 0x90014, meta 0x835ffec), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:03.207019+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad538d400 session 0x557ad791e000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:04.207175+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f4ccf000/0x0/0x4ffc00000, data 0x2335de7/0x25d1000, compress 0x0/0x0/0x0, omap 0x90014, meta 0x835ffec), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 ms_handle_reset con 0x557ad5edd800 session 0x557ad7474fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206102528 unmapped: 49569792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:05.207297+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f5236000/0x0/0x4ffc00000, data 0x23379d7/0x25d4000, compress 0x0/0x0/0x0, omap 0x906b8, meta 0x835f948), peers [1,2] op hist [0,1])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 485 ms_handle_reset con 0x557ad7508c00 session 0x557ad790b340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:06.207424+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472723 data_alloc: 218103808 data_used: 6463633
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 485 ms_handle_reset con 0x557ad7512800 session 0x557ad78f1500
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:07.207562+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 486 ms_handle_reset con 0x557adcfbf400 session 0x557ad791b180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 486 ms_handle_reset con 0x557adcfbf400 session 0x557ad518fc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:08.207688+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 486 ms_handle_reset con 0x557ad538d400 session 0x557ad791ea80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:09.207760+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.657094955s of 10.372067451s, submitted: 114
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:10.207905+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:11.208630+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475337 data_alloc: 218103808 data_used: 6463717
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 487 heartbeat osd_stat(store_statfs(0x4f5246000/0x0/0x4ffc00000, data 0x23230e3/0x25c2000, compress 0x0/0x0/0x0, omap 0x90afb, meta 0x835f505), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 487 heartbeat osd_stat(store_statfs(0x4f5246000/0x0/0x4ffc00000, data 0x23230e3/0x25c2000, compress 0x0/0x0/0x0, omap 0x90afb, meta 0x835f505), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:12.208875+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:13.209033+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:14.209185+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 488 heartbeat osd_stat(store_statfs(0x4f5245000/0x0/0x4ffc00000, data 0x2324b7e/0x25c5000, compress 0x0/0x0/0x0, omap 0x911a3, meta 0x835ee5d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:15.209374+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:16.209524+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477775 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:17.210046+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 488 heartbeat osd_stat(store_statfs(0x4f5245000/0x0/0x4ffc00000, data 0x2324b7e/0x25c5000, compress 0x0/0x0/0x0, omap 0x911a3, meta 0x835ee5d), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:18.210369+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 488 handle_osd_map epochs [488,489], i have 489, src has [1,489]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:19.211049+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5a700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:20.211439+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7508c00 session 0x557ad4eb8a80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:21.211644+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:22.211835+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f5242000/0x0/0x4ffc00000, data 0x23265fd/0x25c8000, compress 0x0/0x0/0x0, omap 0x9134e, meta 0x835ecb2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:23.212100+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:24.212252+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:25.212458+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:26.212977+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480549 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f5242000/0x0/0x4ffc00000, data 0x23265fd/0x25c8000, compress 0x0/0x0/0x0, omap 0x9134e, meta 0x835ecb2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.854288101s of 17.896957397s, submitted: 40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:27.213410+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:28.213873+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7512800 session 0x557ad6bf1c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:29.214083+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:30.214244+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:31.214567+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482994 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f5243000/0x0/0x4ffc00000, data 0x232660d/0x25c9000, compress 0x0/0x0/0x0, omap 0x91201, meta 0x835edff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:32.215177+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:33.215323+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:34.215562+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:35.215699+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f5243000/0x0/0x4ffc00000, data 0x232660d/0x25c9000, compress 0x0/0x0/0x0, omap 0x91201, meta 0x835edff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:36.215859+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482994 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:37.216101+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f5243000/0x0/0x4ffc00000, data 0x232660d/0x25c9000, compress 0x0/0x0/0x0, omap 0x91201, meta 0x835edff), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:38.216213+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206143488 unmapped: 49528832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:39.216354+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.654039383s of 12.704616547s, submitted: 6
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7512800 session 0x557ad5347c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206151680 unmapped: 49520640 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad538d400 session 0x557ad7b5b180
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:40.216533+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad5edd800 session 0x557ad78f08c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7508c00 session 0x557ad5347340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:41.216660+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798495 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:42.216825+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17fc000/0x0/0x4ffc00000, data 0x5d6d60d/0x6010000, compress 0x0/0x0/0x0, omap 0x918ff, meta 0x835e701), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:43.217021+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:44.217137+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:45.217286+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17fc000/0x0/0x4ffc00000, data 0x5d6d60d/0x6010000, compress 0x0/0x0/0x0, omap 0x918ff, meta 0x835e701), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:46.217431+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798495 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:47.217607+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:48.217754+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:49.217907+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557adcfbf400 session 0x557ad78f1340
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:50.218263+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557adcfbf400 session 0x557ad71a8380
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:51.218528+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798495 data_alloc: 218103808 data_used: 6464330
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17fc000/0x0/0x4ffc00000, data 0x5d6d60d/0x6010000, compress 0x0/0x0/0x0, omap 0x918ff, meta 0x835e701), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 49455104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad538d400 session 0x557ad71a96c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.877251625s of 12.318900108s, submitted: 49
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:52.218746+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad5edd800 session 0x557ad7b5b880
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7512800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206528512 unmapped: 49143808 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:53.219049+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206528512 unmapped: 49143808 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:54.219344+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 206528512 unmapped: 49143808 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:55.219571+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:56.219825+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869366 data_alloc: 234881024 data_used: 17587114
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:57.220030+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x5d9161d/0x6035000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:58.220162+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:50:59.220318+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:00.220459+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:01.220585+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869366 data_alloc: 234881024 data_used: 17587114
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x5d9161d/0x6035000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:02.220755+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:03.220871+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:04.220998+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x5d9161d/0x6035000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 45096960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:05.221176+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.081077576s of 13.122850418s, submitted: 10
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 224501760 unmapped: 31170560 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:06.221303+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900834 data_alloc: 234881024 data_used: 18699178
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f1417000/0x0/0x4ffc00000, data 0x5d9161d/0x6035000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [0,0,2,9])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:07.221512+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:08.221754+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:09.221884+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:10.222057+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f03c6000/0x0/0x4ffc00000, data 0x71a261d/0x7446000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:11.222186+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4002146 data_alloc: 234881024 data_used: 18650026
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:12.222393+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:13.222587+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 220372992 unmapped: 35299328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:14.222769+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221290496 unmapped: 34381824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:15.222996+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f03c6000/0x0/0x4ffc00000, data 0x71a261d/0x7446000, compress 0x0/0x0/0x0, omap 0x91c59, meta 0x835e3a7), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.532989502s of 10.072671890s, submitted: 165
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7508c00 session 0x557ad5a84700
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad7512800 session 0x557ad7b26c40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 ms_handle_reset con 0x557ad538d400 session 0x557ad78ec540
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:16.223154+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221323264 unmapped: 34349056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3991809 data_alloc: 234881024 data_used: 18545578
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:17.223273+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221323264 unmapped: 34349056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:18.223425+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221323264 unmapped: 34349056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:19.223565+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221339648 unmapped: 34332672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad5edd800 session 0x557ad763dc00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:20.223761+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7508c00 session 0x557ad78f1a40
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03eb000/0x0/0x4ffc00000, data 0x717e60d/0x7421000, compress 0x0/0x0/0x0, omap 0x91b76, meta 0x835e48a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:21.223939+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996568 data_alloc: 234881024 data_used: 18549082
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e5000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9223e, meta 0x835ddc2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:22.224126+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e5000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9223e, meta 0x835ddc2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:23.224265+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:24.224428+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:25.224547+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:26.224636+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996568 data_alloc: 234881024 data_used: 18549082
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:27.224756+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _renew_subs
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:28.224826+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e5000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9223e, meta 0x835ddc2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:29.224914+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:30.225050+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:31.225178+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3996568 data_alloc: 234881024 data_used: 18549082
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:32.225357+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221347840 unmapped: 34324480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.072566986s of 17.145099640s, submitted: 38
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557adcfbf400 session 0x557ad763d6c0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:33.225472+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e5000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9271e, meta 0x835d8e2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:34.225610+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9271e, meta 0x835d8e2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:35.225762+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:36.225859+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3997244 data_alloc: 234881024 data_used: 18731866
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:37.225981+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:38.226115+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:39.226276+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9271e, meta 0x835d8e2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:40.226430+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:41.226557+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3997244 data_alloc: 234881024 data_used: 18731866
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:42.226731+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:43.226886+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x9271e, meta 0x835d8e2), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:44.227011+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:45.227145+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.525421143s of 13.543110847s, submitted: 2
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:46.227275+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 221356032 unmapped: 34316288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3997244 data_alloc: 234881024 data_used: 18731866
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:47.227415+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 226852864 unmapped: 28819456 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:48.227480+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 227082240 unmapped: 28590080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:49.227644+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 227082240 unmapped: 28590080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:50.227783+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 227082240 unmapped: 28590080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:51.227919+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 227082240 unmapped: 28590080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4056804 data_alloc: 251658240 data_used: 28957530
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:52.228083+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 227082240 unmapped: 28590080 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:53.228196+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231227392 unmapped: 24444928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:54.228281+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231227392 unmapped: 24444928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:55.228354+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231227392 unmapped: 24444928 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets getting new tickets!
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:56.228581+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _finish_auth 0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:56.229990+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231243776 unmapped: 24428544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4064996 data_alloc: 251658240 data_used: 33151834
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:57.228659+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231243776 unmapped: 24428544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:58.228854+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231243776 unmapped: 24428544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:51:59.228975+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231243776 unmapped: 24428544 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:00.229120+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc ms_handle_reset ms_handle_reset con 0x557ad829d800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/273425939
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/273425939,v1:192.168.122.100:6801/273425939]
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: get_auth_request con 0x557ad7512800 auth_method 0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: mgrc handle_mgr_configure stats_period=5
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231260160 unmapped: 24412160 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:01.229246+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 231260160 unmapped: 24412160 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4064996 data_alloc: 251658240 data_used: 33151834
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.066240311s of 16.089530945s, submitted: 3
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:02.229475+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235233280 unmapped: 20439040 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:03.229599+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235233280 unmapped: 20439040 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:04.229774+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235233280 unmapped: 20439040 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:05.229895+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad86d8000 session 0x557ad7535dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:06.230028+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4073780 data_alloc: 251658240 data_used: 36641626
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:07.230179+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:08.230353+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:09.230508+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:10.230656+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:11.230795+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4073780 data_alloc: 251658240 data_used: 36641626
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:12.230973+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:13.231213+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 234938368 unmapped: 20733952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.980907440s of 11.991889000s, submitted: 5
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7192c00 session 0x557ad71a8fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:14.231362+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x9276c, meta 0x835d894), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235388928 unmapped: 20283392 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad5edd800 session 0x557ad78f1dc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x923bf, meta 0x835dc41), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:15.231502+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235421696 unmapped: 20250624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:16.231770+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235421696 unmapped: 20250624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4078340 data_alloc: 251658240 data_used: 37808986
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:17.231924+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235421696 unmapped: 20250624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4effe7000/0x0/0x4ffc00000, data 0x758020b/0x7825000, compress 0x0/0x0/0x0, omap 0x923bf, meta 0x835dc41), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:18.232142+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235421696 unmapped: 20250624 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7508c00 session 0x557add5f8000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557adcfbf400
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557adcfbf400 session 0x557ad790aa80
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:19.232383+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235438080 unmapped: 20234240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:20.232646+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 235438080 unmapped: 20234240 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:21.232806+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad538d000 session 0x557ad71a9c00
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad538d000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad538d000 session 0x557ad5a84fc0
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:22.233011+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:23.233172+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:24.233337+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:25.233465+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:26.233602+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:27.233803+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:28.233960+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:29.234085+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:30.234265+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:31.234398+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:32.234540+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:33.234645+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:34.234802+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:35.234932+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:36.235142+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:37.235273+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:38.235428+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:39.235573+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:40.235733+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:41.235885+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:42.236062+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:43.236239+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ada0cd400 session 0x557ad6384000
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad5edd800
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:44.236404+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:45.236535+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:46.236654+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049888 data_alloc: 251658240 data_used: 36813740
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:47.236960+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:48.237125+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:49.237290+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:50.237457+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:51.237760+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236486656 unmapped: 19185664 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.375411987s of 37.459743500s, submitted: 34
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:31 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:31 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:52.237988+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:53.238117+0000)
Feb 02 16:02:31 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:31 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:54.238258+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:55.238410+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:56.238549+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:57.238782+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:58.238911+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:52:59.239043+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:00.239248+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:01.239433+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:02.239599+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:03.239757+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:04.239896+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:05.240036+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:06.240161+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:07.240290+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:08.240413+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:09.240560+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:10.240637+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:11.240789+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:12.240913+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:13.241066+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:14.241230+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:15.241362+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:16.241494+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:17.241636+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:18.241781+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:19.241946+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:20.242093+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:21.242198+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:22.242332+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:23.242439+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:24.242577+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:25.242783+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:26.242900+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:27.243025+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:28.243154+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:29.243270+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236503040 unmapped: 19169280 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:30.243471+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:31.243600+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:32.243737+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:33.243893+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:34.244046+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:35.244203+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:36.244353+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:37.244515+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:38.244668+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:39.244858+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:40.245057+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:41.245213+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:42.245468+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:43.245611+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:44.245831+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:45.246038+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:46.246193+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:47.246319+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:48.246479+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:49.246625+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:50.246754+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:51.246869+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:52.247063+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:53.247266+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:54.247375+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:55.247533+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:56.247771+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:57.247996+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:58.248158+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:53:59.248346+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:00.248470+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:01.248633+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:02.248994+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:03.249120+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:04.249261+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:05.249470+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:06.249764+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:07.249977+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:08.250105+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 19161088 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:09.250179+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:10.250514+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:11.250648+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:12.250898+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:13.251060+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:14.251402+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:15.251516+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:16.251665+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:17.251938+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:18.252100+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:19.252222+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:20.252365+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:21.252527+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:22.252823+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:23.252996+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:24.253145+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:25.253297+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:26.253493+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:27.253734+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:28.253926+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:29.254053+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:30.254182+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:31.254310+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:32.254500+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:33.254861+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:34.255085+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:35.255260+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:36.255475+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:37.255640+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:38.255900+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:39.256028+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236519424 unmapped: 19152896 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:40.256154+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236527616 unmapped: 19144704 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:41.256568+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236527616 unmapped: 19144704 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:42.256760+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236527616 unmapped: 19144704 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:43.256899+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236527616 unmapped: 19144704 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:44.257049+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:45.257180+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:46.257329+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:47.257549+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:48.257784+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:49.257932+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:50.258062+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:51.258174+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:52.258399+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:53.258543+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:54.258675+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:55.258805+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:56.258924+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:57.259043+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:58.259192+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:54:59.259352+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:00.259537+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:01.259797+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:02.260063+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:03.260255+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:04.260479+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:05.260671+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:06.260831+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:07.261003+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236535808 unmapped: 19136512 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:08.261129+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:09.261272+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:10.261443+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:11.261598+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:12.261798+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:13.261985+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:14.262136+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:15.262327+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:16.262503+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:17.262760+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:18.262899+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:19.263063+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 19128320 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:20.263221+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236552192 unmapped: 19120128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:21.263399+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236552192 unmapped: 19120128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:22.263667+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236552192 unmapped: 19120128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:23.263844+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236552192 unmapped: 19120128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:24.264065+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236552192 unmapped: 19120128 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:25.264247+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:26.264570+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:27.264771+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:28.264973+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:29.265201+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:30.265416+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:31.265583+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:32.265812+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:33.266017+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:34.266206+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 19111936 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:35.266357+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236568576 unmapped: 19103744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:36.266513+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236568576 unmapped: 19103744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:37.266675+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236568576 unmapped: 19103744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:38.266822+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236568576 unmapped: 19103744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:39.267305+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236568576 unmapped: 19103744 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:40.268215+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:41.269159+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:42.270942+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:43.272756+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:44.272993+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:45.273621+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:46.273785+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:47.274132+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:48.274317+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236576768 unmapped: 19095552 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:49.274485+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:50.274696+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:51.274980+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:52.275376+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:53.275566+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:54.275843+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236584960 unmapped: 19087360 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:55.276129+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:56.276455+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:57.276609+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:58.276754+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:55:59.276858+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:00.276975+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:01.277123+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:02.277324+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:03.277441+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:04.277566+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:05.277781+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:06.277962+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236593152 unmapped: 19079168 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:07.278156+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:08.278319+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:09.278476+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:10.278653+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:11.278891+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:12.279138+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:13.279355+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236601344 unmapped: 19070976 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:14.279595+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:15.279876+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:16.280088+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:17.280420+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:18.280647+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:19.280877+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236609536 unmapped: 19062784 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:20.281016+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236617728 unmapped: 19054592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:21.281211+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236617728 unmapped: 19054592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:22.281377+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236617728 unmapped: 19054592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:23.281534+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236617728 unmapped: 19054592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:24.281828+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236617728 unmapped: 19054592 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:25.282056+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236625920 unmapped: 19046400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:26.282274+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236625920 unmapped: 19046400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:27.282565+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236625920 unmapped: 19046400 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:28.282722+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:29.282947+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:30.283197+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:31.283380+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:32.283623+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:33.283813+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:34.283989+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:35.284160+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:36.284401+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:37.284818+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:38.284991+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:39.285149+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:40.285357+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:41.285640+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:42.285973+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:43.286282+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 19005440 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:44.286520+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236675072 unmapped: 18997248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:45.286741+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236675072 unmapped: 18997248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:46.287191+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236675072 unmapped: 18997248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:47.287577+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236675072 unmapped: 18997248 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:48.287963+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:49.288138+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:50.288308+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:51.288512+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:52.288746+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:53.289047+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:54.289226+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:55.289415+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 118K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 11K syncs, 2.68 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1700 writes, 5419 keys, 1700 commit groups, 1.0 writes per commit group, ingest: 7.45 MB, 0.01 MB/s
                                           Interval WAL: 1700 writes, 710 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:56.289625+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:57.289808+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:58.289962+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:56:59.290156+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:00.290353+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:01.290528+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:02.290793+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:03.290989+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:04.291126+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:05.291349+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:06.291542+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:07.291699+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236716032 unmapped: 18956288 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:08.291886+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:09.292156+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:10.292316+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:11.292542+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:12.292795+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:13.292996+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:14.293190+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:15.293383+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:16.293826+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:17.294071+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:18.294236+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:19.294485+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:20.294665+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:21.294849+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:22.295100+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:23.295259+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:24.295473+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:25.295682+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:26.295921+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:27.296064+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:28.296289+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:29.296457+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:30.296596+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236773376 unmapped: 18898944 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:31.296838+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:32.297101+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:33.297311+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:34.297528+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:35.297893+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:36.298123+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236789760 unmapped: 18882560 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:37.298367+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236797952 unmapped: 18874368 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:38.298512+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236797952 unmapped: 18874368 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:39.298686+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:40.298911+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:41.299106+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:42.299369+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:43.299511+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:44.299810+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:45.299990+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236822528 unmapped: 18849792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:46.300198+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236822528 unmapped: 18849792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:47.300403+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236830720 unmapped: 18841600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:48.300600+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236830720 unmapped: 18841600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:49.300827+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236830720 unmapped: 18841600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:50.301067+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236830720 unmapped: 18841600 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:51.301734+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.997192383s of 300.021240234s, submitted: 18
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236863488 unmapped: 18808832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:52.302081+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236863488 unmapped: 18808832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:53.302470+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236863488 unmapped: 18808832 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:54.302611+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:55.302762+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:56.302925+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:57.303106+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:58.303414+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:57:59.303919+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:00.304138+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:01.304525+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:02.304824+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236937216 unmapped: 18735104 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:03.305043+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236945408 unmapped: 18726912 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:04.305225+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236945408 unmapped: 18726912 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:05.305431+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236945408 unmapped: 18726912 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:06.305639+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236953600 unmapped: 18718720 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:07.305955+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236953600 unmapped: 18718720 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:08.306337+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236961792 unmapped: 18710528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:09.306586+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236961792 unmapped: 18710528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:10.306854+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236961792 unmapped: 18710528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:11.307062+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236961792 unmapped: 18710528 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:12.307306+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:13.307441+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:14.307618+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:15.307797+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:16.307952+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:17.308149+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236969984 unmapped: 18702336 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:18.308295+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236978176 unmapped: 18694144 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:19.308495+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:20.308677+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:21.308833+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:22.309030+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:23.309168+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:24.309261+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:25.309416+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:26.309580+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236986368 unmapped: 18685952 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:27.309798+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236994560 unmapped: 18677760 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:28.310126+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236634112 unmapped: 19038208 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:29.310263+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236634112 unmapped: 19038208 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:30.310421+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 19030016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:31.310545+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 19030016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:32.310818+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 19030016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:33.310922+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 19030016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:34.311061+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 19030016 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:35.311196+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:36.311319+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:37.311469+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:38.311646+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:39.311789+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:40.311933+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:41.312115+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:42.312332+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236650496 unmapped: 19021824 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:43.312489+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:44.312675+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:45.312801+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:46.312937+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:47.313103+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:48.313248+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:49.313374+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:50.313506+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236658688 unmapped: 19013632 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:51.313668+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:52.313964+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:53.314119+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:54.314274+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:55.314405+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:56.314555+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:57.314772+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:58.314978+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236683264 unmapped: 18989056 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:58:59.315142+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:00.315345+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:01.315508+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:02.315754+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:03.315883+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:04.316040+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:05.316231+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:06.316413+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236691456 unmapped: 18980864 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:07.316576+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236699648 unmapped: 18972672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:08.316768+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236699648 unmapped: 18972672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:09.316934+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236699648 unmapped: 18972672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:10.317059+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236699648 unmapped: 18972672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:11.317183+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236699648 unmapped: 18972672 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:12.317353+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:13.317471+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:14.317625+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:15.318003+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:16.318130+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:17.318314+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:18.318485+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236707840 unmapped: 18964480 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:19.318646+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:20.318792+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:21.318946+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:22.319630+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:23.319797+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236724224 unmapped: 18948096 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:24.319941+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:25.320091+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:26.320212+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:27.320363+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7193400 session 0x557ad78f0fc0
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7192c00
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:28.320485+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:29.320668+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:30.320823+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:31.320971+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236732416 unmapped: 18939904 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:32.321207+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:33.321356+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:34.321498+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:35.321634+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:36.321779+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236740608 unmapped: 18931712 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:37.321909+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236748800 unmapped: 18923520 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:38.322086+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236748800 unmapped: 18923520 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:39.346297+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:40.346490+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:41.346658+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:42.346994+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236756992 unmapped: 18915328 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:43.347157+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:44.347315+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:45.347547+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:46.347786+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:47.348136+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236765184 unmapped: 18907136 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:48.348329+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:49.348479+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:50.348633+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:51.348820+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:52.349058+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236781568 unmapped: 18890752 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:53.349220+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236789760 unmapped: 18882560 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:54.349427+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236789760 unmapped: 18882560 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:55.349665+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236797952 unmapped: 18874368 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:56.349894+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:57.350055+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:58.350215+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T15:59:59.350385+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:00.350656+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:01.350838+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:02.351041+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:03.351173+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:04.351326+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236806144 unmapped: 18866176 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:05.351487+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:06.351760+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:07.351928+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236814336 unmapped: 18857984 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f03e7000/0x0/0x4ffc00000, data 0x718020b/0x7425000, compress 0x0/0x0/0x0, omap 0x92976, meta 0x835d68a), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:08.352150+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236822528 unmapped: 18849792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049744 data_alloc: 251658240 data_used: 36813892
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:09.352353+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236822528 unmapped: 18849792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:10.352527+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7191c00 session 0x557ad791f180
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7192c00 session 0x557ad8f06c40
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557add024c00 session 0x557ad5e9d500
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 236822528 unmapped: 18849792 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: handle_auth_request added challenge on 0x557ad7508c00
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 138.871917725s of 139.042541504s, submitted: 106
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 ms_handle_reset con 0x557ad7508c00 session 0x557adcf27dc0
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187b000/0x0/0x4ffc00000, data 0x5cec20b/0x5f91000, compress 0x0/0x0/0x0, omap 0x92a10, meta 0x835d5f0), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:11.352734+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:12.352966+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:13.353226+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:14.353443+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:15.353627+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:16.353801+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:17.353980+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:18.354234+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:19.354424+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:20.354641+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:21.354808+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:22.355017+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:23.355240+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:24.355424+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:25.355558+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:26.355730+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:27.355941+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:28.356108+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:29.356274+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:30.356527+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:31.356842+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:32.357071+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:33.357227+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:34.357450+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:35.357671+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:36.357846+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:37.358050+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:38.358239+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:39.358414+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:40.358587+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:41.358770+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:42.359069+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:43.359207+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:44.359426+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:45.359586+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:46.359801+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:47.360063+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:48.360254+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:49.360395+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:50.360546+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:51.360747+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:52.360998+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:53.361164+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:54.361345+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:55.361517+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:56.361782+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:57.362936+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:58.363092+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:00:59.363239+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:00.363401+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:01.363612+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:02.363812+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:03.364007+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:04.364198+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:05.364394+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:06.364632+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:07.364839+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:08.365011+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:09.365209+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232054784 unmapped: 23617536 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:10.365355+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:11.365505+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:12.365733+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:13.365967+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:14.366151+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:15.366306+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:16.366539+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:17.366786+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:18.366946+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:19.367143+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:20.367330+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:21.367491+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:22.367691+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:23.367864+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:24.368011+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:25.368232+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:26.368408+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:27.368565+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232062976 unmapped: 23609344 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:28.368748+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:29.368925+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:30.369108+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:31.369258+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:32.369444+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:33.369610+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:34.369781+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232071168 unmapped: 23601152 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:35.370038+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:36.370199+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:37.370377+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:38.370542+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:39.370659+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:40.370790+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:41.370966+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:42.371150+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:43.371292+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:44.371464+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232079360 unmapped: 23592960 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:45.371618+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:46.371766+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:47.372518+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:48.372652+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:49.372786+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:50.373041+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:51.373184+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:52.373378+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:53.373541+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:54.373717+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:55.373855+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:56.374016+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:57.374151+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232087552 unmapped: 23584768 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:58.374312+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'config diff' '{prefix=config diff}'
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 02 16:02:32 compute-0 ceph-osd[86115]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232210432 unmapped: 23461888 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3897149 data_alloc: 251658240 data_used: 30547012
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:01:59.374465+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'config show' '{prefix=config show}'
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'counter dump' '{prefix=counter dump}'
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'counter schema' '{prefix=counter schema}'
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232030208 unmapped: 23642112 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:02:00.374626+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: prioritycache tune_memory target: 4294967296 mapped: 232112128 unmapped: 23560192 heap: 255672320 old mem: 2845415832 new mem: 2845415832
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: tick
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_tickets
Feb 02 16:02:32 compute-0 ceph-osd[86115]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-02T16:02:01.374831+0000)
Feb 02 16:02:32 compute-0 ceph-osd[86115]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f187c000/0x0/0x4ffc00000, data 0x5cec1fb/0x5f90000, compress 0x0/0x0/0x0, omap 0x92aaa, meta 0x835d556), peers [1,2] op hist [])
Feb 02 16:02:32 compute-0 ceph-osd[86115]: do_command 'log dump' '{prefix=log dump}'
Feb 02 16:02:32 compute-0 ceph-mon[75334]: pgmap v2153: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:32 compute-0 ceph-mon[75334]: from='client.19242 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:32 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:02:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416813194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.123 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.265 239549 WARNING nova.virt.libvirt.driver [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.266 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3870MB free_disk=59.98818066995591GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.266 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.266 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 02 16:02:32 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.350 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.350 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.370 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing inventories for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.390 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating ProviderTree inventory for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.390 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Updating inventory in ProviderTree for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 02 16:02:32 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} v 0)
Feb 02 16:02:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.418 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing aggregate associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.444 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Refreshing trait associations for resource provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75, traits: COMPUTE_NODE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_F16C,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 02 16:02:32 compute-0 nova_compute[239545]: 2026-02-02 16:02:32.467 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 02 16:02:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 02 16:02:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144801379' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 02 16:02:32 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:32 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 02 16:02:32 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705210084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:02:33 compute-0 nova_compute[239545]: 2026-02-02 16:02:33.015 239549 DEBUG oslo_concurrency.processutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 02 16:02:33 compute-0 nova_compute[239545]: 2026-02-02 16:02:33.022 239549 DEBUG nova.compute.provider_tree [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed in ProviderTree for provider: b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 02 16:02:33 compute-0 nova_compute[239545]: 2026-02-02 16:02:33.048 239549 DEBUG nova.scheduler.client.report [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Inventory has not changed for provider b7d3f1a7-cf61-4724-a3a4-d9df4b77ee75 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 02 16:02:33 compute-0 nova_compute[239545]: 2026-02-02 16:02:33.050 239549 DEBUG nova.compute.resource_tracker [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 02 16:02:33 compute-0 nova_compute[239545]: 2026-02-02 16:02:33.050 239549 DEBUG oslo_concurrency.lockutils [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.19244 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.19246 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.19248 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.19250 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/416813194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='mgr.14122 192.168.122.100:0/2849446170' entity='mgr.compute-0.rxryxi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.bzshzr", "name": "rgw_frontends"} : dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1144801379' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3705210084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 02 16:02:33 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:33 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Feb 02 16:02:33 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682723303' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 02 16:02:33 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19268 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2452527768' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 02 16:02:34 compute-0 nova_compute[239545]: 2026-02-02 16:02:34.051 239549 DEBUG oslo_service.periodic_task [None req-14a5b3f3-3c21-409c-955c-c30221c487e0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 02 16:02:34 compute-0 ceph-mon[75334]: pgmap v2154: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:34 compute-0 ceph-mon[75334]: from='client.19256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: from='client.19260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/682723303' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2452527768' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:34 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:34 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510368805' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 02 16:02:34 compute-0 systemd[1]: Starting Hostname Service...
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 16:02:34 compute-0 systemd[1]: Started Hostname Service.
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 16:02:34 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='client.19268 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2510368805' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 02 16:02:35 compute-0 ceph-mon[75334]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 02 16:02:35 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 02 16:02:35 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596903550' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 02 16:02:35 compute-0 nova_compute[239545]: 2026-02-02 16:02:35.407 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:35 compute-0 nova_compute[239545]: 2026-02-02 16:02:35.643 239549 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 02 16:02:35 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19284 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:36 compute-0 ceph-mon[75334]: pgmap v2155: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:36 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2596903550' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 02 16:02:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 02 16:02:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007477391' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 02 16:02:36 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:36 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Feb 02 16:02:36 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1463204440' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 02 16:02:37 compute-0 ceph-mon[75334]: from='client.19284 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3007477391' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 02 16:02:37 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/1463204440' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 02 16:02:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 02 16:02:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995222887' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 02 16:02:37 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 02 16:02:37 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481553013' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 02 16:02:38 compute-0 ceph-mon[75334]: pgmap v2156: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/2995222887' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 02 16:02:38 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/481553013' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 02 16:02:38 compute-0 ceph-mgr[75628]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:38 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19294 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:38 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 02 16:02:38 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945680542' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 02 16:02:39 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/3945680542' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 02 16:02:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 02 16:02:39 compute-0 ceph-mon[75334]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb 02 16:02:39 compute-0 ceph-mon[75334]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494786565' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 02 16:02:39 compute-0 ceph-mgr[75628]: log_channel(audit) log [DBG] : from='client.19300 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:40 compute-0 ceph-mon[75334]: pgmap v2157: 305 pgs: 305 active+clean; 273 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb 02 16:02:40 compute-0 ceph-mon[75334]: from='client.19294 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 02 16:02:40 compute-0 ceph-mon[75334]: from='client.? 192.168.122.100:0/494786565' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 02 16:02:40 compute-0 ceph-mon[75334]: from='client.19300 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
